2023-07-19 05:14:36,301 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c
2023-07-19 05:14:36,317 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins
2023-07-19 05:14:36,333 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-19 05:14:36,333 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/cluster_bf8aab3d-fc29-3d9a-7b1d-919a6995935e, deleteOnExit=true
2023-07-19 05:14:36,334 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-19 05:14:36,334 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/test.cache.data in system properties and HBase conf
2023-07-19 05:14:36,335 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/hadoop.tmp.dir in system properties and HBase conf
2023-07-19 05:14:36,336 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/hadoop.log.dir in system properties and HBase conf
2023-07-19 05:14:36,336 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-19 05:14:36,336 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-19 05:14:36,337 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-19 05:14:36,457 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-19 05:14:36,886 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-19 05:14:36,890 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-19 05:14:36,891 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-19 05:14:36,891 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-19 05:14:36,891 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-19 05:14:36,892 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-19 05:14:36,892 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-19 05:14:36,892 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-19 05:14:36,892 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-19 05:14:36,893 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-19 05:14:36,893 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/nfs.dump.dir in system properties and HBase conf
2023-07-19 05:14:36,893 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/java.io.tmpdir in system properties and HBase conf
2023-07-19 05:14:36,893 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-19 05:14:36,894 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-19 05:14:36,894 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-19 05:14:37,492 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-19 05:14:37,497 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-19 05:14:37,798 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-19 05:14:37,994 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-19 05:14:38,010 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-19 05:14:38,042 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-19 05:14:38,073 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/java.io.tmpdir/Jetty_localhost_35985_hdfs____.rot5vm/webapp
2023-07-19 05:14:38,230 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35985
2023-07-19 05:14:38,245 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-19 05:14:38,245 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-19 05:14:38,709 WARN [Listener at localhost/34189] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-19 05:14:38,779 WARN [Listener at localhost/34189] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-19 05:14:38,796 WARN [Listener at localhost/34189] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-19 05:14:38,803 INFO [Listener at localhost/34189] log.Slf4jLog(67): jetty-6.1.26
2023-07-19 05:14:38,809 INFO [Listener at localhost/34189] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/java.io.tmpdir/Jetty_localhost_38397_datanode____uurldf/webapp
2023-07-19 05:14:38,910 INFO [Listener at localhost/34189] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38397
2023-07-19 05:14:39,220 WARN [Listener at localhost/34887] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-19 05:14:39,235 WARN [Listener at localhost/34887] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-19 05:14:39,241 WARN [Listener at localhost/34887] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-19 05:14:39,243 INFO [Listener at localhost/34887] log.Slf4jLog(67): jetty-6.1.26
2023-07-19 05:14:39,249 INFO [Listener at localhost/34887] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/java.io.tmpdir/Jetty_localhost_33317_datanode____.23owv4/webapp
2023-07-19 05:14:39,350 INFO [Listener at localhost/34887] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33317
2023-07-19 05:14:39,358 WARN [Listener at localhost/36775] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-19 05:14:39,379 WARN [Listener at localhost/36775] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-19 05:14:39,383 WARN [Listener at localhost/36775] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-19 05:14:39,385 INFO [Listener at localhost/36775] log.Slf4jLog(67): jetty-6.1.26
2023-07-19 05:14:39,390 INFO [Listener at localhost/36775] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/java.io.tmpdir/Jetty_localhost_39481_datanode____dsctwu/webapp
2023-07-19 05:14:39,515 INFO [Listener at localhost/36775] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39481
2023-07-19 05:14:39,526 WARN [Listener at localhost/38799] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-19 05:14:39,813 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe8b67a5b4b432e10: Processing first storage report for DS-d6cb6fa4-39a5-4030-8421-d6a59591b251 from datanode cc806028-c72f-4b87-ae2e-65a60f2f2519
2023-07-19 05:14:39,814 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe8b67a5b4b432e10: from storage DS-d6cb6fa4-39a5-4030-8421-d6a59591b251 node DatanodeRegistration(127.0.0.1:41797, datanodeUuid=cc806028-c72f-4b87-ae2e-65a60f2f2519, infoPort=36803, infoSecurePort=0, ipcPort=38799, storageInfo=lv=-57;cid=testClusterID;nsid=696587011;c=1689743677571), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-07-19 05:14:39,814 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x32dbdf87e072afbb: Processing first storage report for DS-3d30db79-cde8-421e-9ff0-253a3108aa03 from datanode ee2087e9-74e5-4fa9-aee9-e4c7a687ba43
2023-07-19 05:14:39,814 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x32dbdf87e072afbb: from storage DS-3d30db79-cde8-421e-9ff0-253a3108aa03 node DatanodeRegistration(127.0.0.1:35903, datanodeUuid=ee2087e9-74e5-4fa9-aee9-e4c7a687ba43, infoPort=39547, infoSecurePort=0, ipcPort=36775, storageInfo=lv=-57;cid=testClusterID;nsid=696587011;c=1689743677571), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-19 05:14:39,814 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9c82ac872ff7ae09: Processing first storage report for DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8 from datanode 328eaa33-c8d1-46ed-94e4-2de79e5a106e
2023-07-19 05:14:39,815 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9c82ac872ff7ae09: from storage DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8 node DatanodeRegistration(127.0.0.1:41221, datanodeUuid=328eaa33-c8d1-46ed-94e4-2de79e5a106e, infoPort=44905, infoSecurePort=0, ipcPort=34887, storageInfo=lv=-57;cid=testClusterID;nsid=696587011;c=1689743677571), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-19 05:14:39,815 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe8b67a5b4b432e10: Processing first storage report for DS-7124be2a-53b0-4a2d-aa59-b7ce6b599aee from datanode cc806028-c72f-4b87-ae2e-65a60f2f2519
2023-07-19 05:14:39,815 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe8b67a5b4b432e10: from storage DS-7124be2a-53b0-4a2d-aa59-b7ce6b599aee node DatanodeRegistration(127.0.0.1:41797, datanodeUuid=cc806028-c72f-4b87-ae2e-65a60f2f2519, infoPort=36803, infoSecurePort=0, ipcPort=38799, storageInfo=lv=-57;cid=testClusterID;nsid=696587011;c=1689743677571), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-19 05:14:39,815 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x32dbdf87e072afbb: Processing first storage report for DS-47ea3703-b912-4648-a82f-db836a57d66c from datanode ee2087e9-74e5-4fa9-aee9-e4c7a687ba43
2023-07-19 05:14:39,815 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x32dbdf87e072afbb: from storage DS-47ea3703-b912-4648-a82f-db836a57d66c node DatanodeRegistration(127.0.0.1:35903, datanodeUuid=ee2087e9-74e5-4fa9-aee9-e4c7a687ba43, infoPort=39547, infoSecurePort=0, ipcPort=36775, storageInfo=lv=-57;cid=testClusterID;nsid=696587011;c=1689743677571), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-19 05:14:39,815 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9c82ac872ff7ae09: Processing first storage report for DS-30ae9a43-e9d2-4fc4-ba47-42d6f151e46d from datanode 328eaa33-c8d1-46ed-94e4-2de79e5a106e
2023-07-19 05:14:39,815 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9c82ac872ff7ae09: from storage DS-30ae9a43-e9d2-4fc4-ba47-42d6f151e46d node DatanodeRegistration(127.0.0.1:41221, datanodeUuid=328eaa33-c8d1-46ed-94e4-2de79e5a106e, infoPort=44905, infoSecurePort=0, ipcPort=34887, storageInfo=lv=-57;cid=testClusterID;nsid=696587011;c=1689743677571), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-19 05:14:39,974 DEBUG [Listener at localhost/38799] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c
2023-07-19 05:14:40,057 INFO [Listener at localhost/38799] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/cluster_bf8aab3d-fc29-3d9a-7b1d-919a6995935e/zookeeper_0, clientPort=54772, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/cluster_bf8aab3d-fc29-3d9a-7b1d-919a6995935e/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/cluster_bf8aab3d-fc29-3d9a-7b1d-919a6995935e/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-19 05:14:40,071 INFO [Listener at localhost/38799] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54772
2023-07-19 05:14:40,082 INFO [Listener at localhost/38799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-19 05:14:40,084 INFO [Listener at localhost/38799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-19 05:14:40,765 INFO [Listener at localhost/38799] util.FSUtils(471): Created version file at hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd with version=8
2023-07-19 05:14:40,765 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/hbase-staging
2023-07-19 05:14:40,773 DEBUG [Listener at localhost/38799] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-19 05:14:40,774 DEBUG [Listener at localhost/38799] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-19 05:14:40,774 DEBUG [Listener at localhost/38799] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-19 05:14:40,774 DEBUG [Listener at localhost/38799] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
2023-07-19 05:14:41,147 INFO [Listener at localhost/38799] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-19 05:14:41,765 INFO [Listener at localhost/38799] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-07-19 05:14:41,818 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-19 05:14:41,818 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-19 05:14:41,819 INFO [Listener at localhost/38799] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-19 05:14:41,819 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-19 05:14:41,819 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-19 05:14:42,027 INFO [Listener at localhost/38799] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-19 05:14:42,105 DEBUG [Listener at localhost/38799] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-19 05:14:42,209 INFO [Listener at localhost/38799] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35853
2023-07-19 05:14:42,220 INFO [Listener at localhost/38799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-19 05:14:42,222 INFO [Listener at localhost/38799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-19 05:14:42,245 INFO [Listener at localhost/38799] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35853 connecting to ZooKeeper ensemble=127.0.0.1:54772
2023-07-19 05:14:42,294 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:358530x0, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-19 05:14:42,297 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35853-0x1017c00e52c0000 connected
2023-07-19 05:14:42,342 DEBUG [Listener at localhost/38799] zookeeper.ZKUtil(164): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-19 05:14:42,343 DEBUG [Listener at localhost/38799] zookeeper.ZKUtil(164): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-19 05:14:42,347 DEBUG [Listener at localhost/38799] zookeeper.ZKUtil(164): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-19 05:14:42,356 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35853
2023-07-19 05:14:42,357 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35853
2023-07-19 05:14:42,358 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35853
2023-07-19 05:14:42,360 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35853
2023-07-19 05:14:42,360 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35853
2023-07-19 05:14:42,397 INFO [Listener at localhost/38799] log.Log(170): Logging initialized @6825ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-19 05:14:42,549 INFO [Listener at localhost/38799] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-19 05:14:42,550 INFO [Listener at localhost/38799] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-19 05:14:42,551 INFO [Listener at localhost/38799] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-19 05:14:42,554 INFO [Listener at localhost/38799] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-19 05:14:42,554 INFO [Listener at localhost/38799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-19 05:14:42,554 INFO [Listener at localhost/38799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-19 05:14:42,559 INFO [Listener at localhost/38799] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-19 05:14:42,629 INFO [Listener at localhost/38799] http.HttpServer(1146): Jetty bound to port 36473
2023-07-19 05:14:42,630 INFO [Listener at localhost/38799] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-19 05:14:42,669 INFO [Listener at localhost/38799] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-19 05:14:42,673 INFO [Listener at localhost/38799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@44044822{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/hadoop.log.dir/,AVAILABLE}
2023-07-19 05:14:42,674 INFO [Listener at localhost/38799] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-19 05:14:42,674 INFO [Listener at localhost/38799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6d4efddd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-19 05:14:42,872 INFO [Listener at localhost/38799] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-19 05:14:42,883 INFO [Listener at localhost/38799] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-19 05:14:42,884 INFO [Listener at localhost/38799] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-19 05:14:42,886 INFO [Listener at localhost/38799] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-19 05:14:42,895 INFO [Listener at localhost/38799] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-19 05:14:42,927 INFO [Listener at localhost/38799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@b4df03b{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/java.io.tmpdir/jetty-0_0_0_0-36473-hbase-server-2_4_18-SNAPSHOT_jar-_-any-375610959657464509/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-19 05:14:42,940 INFO [Listener at localhost/38799] server.AbstractConnector(333): Started ServerConnector@63cd509d{HTTP/1.1, (http/1.1)}{0.0.0.0:36473}
2023-07-19 05:14:42,940 INFO [Listener at localhost/38799] server.Server(415): Started @7368ms
2023-07-19 05:14:42,944 INFO [Listener at localhost/38799] master.HMaster(444): hbase.rootdir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd, hbase.cluster.distributed=false
2023-07-19 05:14:43,029 INFO [Listener at localhost/38799] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-19 05:14:43,029 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-19 05:14:43,030 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-19 05:14:43,030 INFO [Listener at localhost/38799] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-19 05:14:43,030 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-19 05:14:43,030 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-19 05:14:43,038 INFO [Listener at localhost/38799] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-19 05:14:43,042 INFO [Listener at localhost/38799] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45681
2023-07-19 05:14:43,044 INFO [Listener at localhost/38799] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-19 05:14:43,052 DEBUG [Listener at localhost/38799] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-19 05:14:43,053 INFO [Listener at localhost/38799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-19 05:14:43,055 INFO [Listener at localhost/38799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-19 05:14:43,057 INFO [Listener at localhost/38799] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45681 connecting to ZooKeeper ensemble=127.0.0.1:54772
2023-07-19 05:14:43,062 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:456810x0, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-19 05:14:43,063 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45681-0x1017c00e52c0001 connected
2023-07-19 05:14:43,063 DEBUG [Listener at localhost/38799] zookeeper.ZKUtil(164): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-19 05:14:43,064 DEBUG [Listener at localhost/38799] zookeeper.ZKUtil(164): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-19 05:14:43,065 DEBUG [Listener at localhost/38799] zookeeper.ZKUtil(164): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-19 05:14:43,066 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45681
2023-07-19 05:14:43,066 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45681
2023-07-19 05:14:43,067 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45681
2023-07-19 05:14:43,067 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45681
2023-07-19 05:14:43,067 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45681
2023-07-19 05:14:43,070 INFO [Listener at localhost/38799] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-19 05:14:43,070 INFO [Listener at localhost/38799] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-19 05:14:43,070 INFO [Listener at localhost/38799] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-19 05:14:43,071 INFO [Listener at localhost/38799] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-19 05:14:43,071 INFO [Listener at localhost/38799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-19 05:14:43,071 INFO [Listener at localhost/38799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-19 05:14:43,072 INFO [Listener at localhost/38799] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-19 05:14:43,074 INFO [Listener at localhost/38799] http.HttpServer(1146): Jetty bound to port 33363
2023-07-19 05:14:43,074 INFO [Listener at localhost/38799] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-19 05:14:43,079 INFO [Listener at localhost/38799] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-19 05:14:43,079 INFO [Listener at localhost/38799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@705c29b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/hadoop.log.dir/,AVAILABLE}
2023-07-19 05:14:43,080 INFO [Listener at localhost/38799] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-19 05:14:43,080 INFO [Listener at localhost/38799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@b134c3c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-19 05:14:43,208 INFO [Listener at localhost/38799] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-19 05:14:43,209 INFO [Listener at localhost/38799] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-19 05:14:43,209 INFO [Listener at localhost/38799] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-19 05:14:43,210 INFO [Listener at localhost/38799] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-19 05:14:43,211 INFO [Listener at localhost/38799] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-19 05:14:43,214 INFO [Listener at localhost/38799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2068cbfe{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/java.io.tmpdir/jetty-0_0_0_0-33363-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3589860683942918714/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-19 05:14:43,216 INFO [Listener at localhost/38799] server.AbstractConnector(333): Started ServerConnector@2cb4cda5{HTTP/1.1, (http/1.1)}{0.0.0.0:33363}
2023-07-19 05:14:43,216 INFO [Listener at localhost/38799] server.Server(415): Started @7644ms
2023-07-19 05:14:43,229 INFO [Listener at localhost/38799] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-19 05:14:43,229 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-19 05:14:43,230 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-19 05:14:43,230 INFO [Listener at localhost/38799] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-19 05:14:43,230 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-19 05:14:43,231 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-19 05:14:43,231 INFO [Listener at localhost/38799] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-19 05:14:43,232 INFO [Listener at localhost/38799] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41899
2023-07-19 05:14:43,233 INFO [Listener at localhost/38799] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-19 05:14:43,236 DEBUG [Listener at localhost/38799] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-19 05:14:43,236 INFO [Listener at localhost/38799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-19 05:14:43,238 INFO [Listener at localhost/38799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-19 05:14:43,239 INFO [Listener at localhost/38799] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41899 connecting to ZooKeeper ensemble=127.0.0.1:54772
2023-07-19 05:14:43,242 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:418990x0, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-19 05:14:43,243 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41899-0x1017c00e52c0002 connected
2023-07-19 05:14:43,244 DEBUG [Listener at localhost/38799] zookeeper.ZKUtil(164): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-19 05:14:43,244 DEBUG [Listener at localhost/38799] zookeeper.ZKUtil(164): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-19 05:14:43,245 DEBUG [Listener at localhost/38799] zookeeper.ZKUtil(164): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-19 05:14:43,246 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41899
2023-07-19 05:14:43,247 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41899
2023-07-19 05:14:43,250 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41899
2023-07-19 05:14:43,251 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41899
2023-07-19 05:14:43,251 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41899
2023-07-19 05:14:43,254 INFO [Listener at localhost/38799] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-19 05:14:43,254 INFO [Listener at localhost/38799] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-19 05:14:43,254 INFO [Listener at localhost/38799] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-19 05:14:43,255 INFO [Listener at localhost/38799] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-19 05:14:43,255 INFO [Listener at localhost/38799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-19 05:14:43,255 INFO [Listener at localhost/38799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-19 05:14:43,255 INFO [Listener at localhost/38799] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-19 05:14:43,256 INFO [Listener at localhost/38799] http.HttpServer(1146): Jetty bound to port 43189
2023-07-19 05:14:43,256 INFO [Listener at localhost/38799] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-19 05:14:43,260 INFO [Listener at localhost/38799] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-19 05:14:43,260 INFO [Listener at localhost/38799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@34863637{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/hadoop.log.dir/,AVAILABLE}
2023-07-19 05:14:43,261 INFO [Listener at localhost/38799] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-19 05:14:43,261 INFO [Listener at localhost/38799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@454bace{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-19 05:14:43,415 INFO [Listener at localhost/38799] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-19 05:14:43,416 INFO [Listener at localhost/38799] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-19 05:14:43,417 INFO [Listener at localhost/38799] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-19 05:14:43,417 INFO [Listener at localhost/38799] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-19 05:14:43,419 INFO [Listener at localhost/38799] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-19 05:14:43,421 INFO [Listener at localhost/38799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4bc6a9e2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/java.io.tmpdir/jetty-0_0_0_0-43189-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4970846268250991118/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-19 05:14:43,422 INFO [Listener at localhost/38799] server.AbstractConnector(333): Started ServerConnector@6b7406ba{HTTP/1.1, (http/1.1)}{0.0.0.0:43189}
2023-07-19 05:14:43,422 INFO [Listener at localhost/38799] server.Server(415): Started @7850ms
2023-07-19 05:14:43,436 INFO [Listener at localhost/38799] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-19 05:14:43,436 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-19 05:14:43,436 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-19 05:14:43,437 INFO [Listener at localhost/38799] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-19 05:14:43,437 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-19 05:14:43,437 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-19 05:14:43,437 INFO [Listener at localhost/38799] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-19 05:14:43,439 INFO [Listener at localhost/38799] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41979
2023-07-19 05:14:43,440 INFO [Listener at localhost/38799] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-19 05:14:43,442 DEBUG [Listener at localhost/38799] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-19 05:14:43,443 INFO [Listener at localhost/38799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-19 05:14:43,445 INFO [Listener at localhost/38799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-19 05:14:43,447 INFO [Listener at localhost/38799] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41979 connecting to ZooKeeper ensemble=127.0.0.1:54772
2023-07-19 05:14:43,453 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:419790x0, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-19 05:14:43,456 DEBUG [Listener at localhost/38799] zookeeper.ZKUtil(164): regionserver:419790x0, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-19 05:14:43,457 DEBUG [Listener at localhost/38799] zookeeper.ZKUtil(164): regionserver:419790x0, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-19 05:14:43,458 DEBUG [Listener at localhost/38799] zookeeper.ZKUtil(164): regionserver:419790x0, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-19 05:14:43,459 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41979-0x1017c00e52c0003 connected
2023-07-19 05:14:43,460 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41979
2023-07-19 05:14:43,460 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41979
2023-07-19 05:14:43,464 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41979
2023-07-19 05:14:43,464 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41979
2023-07-19 05:14:43,464 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41979
2023-07-19 05:14:43,468 INFO [Listener at localhost/38799] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-19 05:14:43,468 INFO [Listener at localhost/38799] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-19 05:14:43,468 INFO [Listener at localhost/38799] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-19 05:14:43,469 INFO [Listener at localhost/38799] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-19 05:14:43,469 INFO [Listener at localhost/38799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-19 05:14:43,469 INFO [Listener at localhost/38799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-19 05:14:43,469 INFO [Listener at localhost/38799] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-19 05:14:43,471 INFO [Listener at localhost/38799] http.HttpServer(1146): Jetty bound to port 44691
2023-07-19 05:14:43,471 INFO [Listener at localhost/38799] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-19 05:14:43,484 INFO [Listener at localhost/38799] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-19 05:14:43,484 INFO [Listener at localhost/38799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@709df1b3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/hadoop.log.dir/,AVAILABLE}
2023-07-19 05:14:43,485 INFO [Listener at localhost/38799] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-19 05:14:43,485 INFO [Listener at localhost/38799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5857c9af{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-19 05:14:43,619 INFO [Listener at localhost/38799] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-19 05:14:43,620 INFO [Listener at localhost/38799] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-19 05:14:43,621 INFO [Listener at localhost/38799] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-19 05:14:43,621 INFO [Listener at localhost/38799] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-19 05:14:43,622 INFO [Listener at localhost/38799] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-19 05:14:43,623 INFO [Listener at localhost/38799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6c3aed70{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/java.io.tmpdir/jetty-0_0_0_0-44691-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8529592429051941915/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-19 05:14:43,625 INFO [Listener at localhost/38799] server.AbstractConnector(333): Started ServerConnector@6c3f6670{HTTP/1.1, (http/1.1)}{0.0.0.0:44691}
2023-07-19 05:14:43,625 INFO [Listener at localhost/38799] server.Server(415): Started @8053ms
2023-07-19 05:14:43,632 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-19 05:14:43,636 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@7a62212f{HTTP/1.1, (http/1.1)}{0.0.0.0:35453}
2023-07-19 05:14:43,636 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8064ms
2023-07-19 05:14:43,636 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,35853,1689743680958
2023-07-19 05:14:43,646 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-19 05:14:43,648 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,35853,1689743680958
2023-07-19 05:14:43,672 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-19 05:14:43,672 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-19 05:14:43,672 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-19 05:14:43,672 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-19 05:14:43,673 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-19 05:14:43,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-19 05:14:43,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,35853,1689743680958 from backup master directory
2023-07-19 05:14:43,676 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-19 05:14:43,681 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,35853,1689743680958
2023-07-19 05:14:43,681 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-19 05:14:43,682 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-19 05:14:43,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,35853,1689743680958
2023-07-19 05:14:43,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-07-19 05:14:43,710 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-07-19 05:14:43,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/hbase.id with ID: c2744a8f-f59b-4711-ae0f-8614a50d23f0
2023-07-19 05:14:43,906 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-19 05:14:43,926 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-19 05:14:44,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x10ac7693 to 127.0.0.1:54772 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-19 05:14:44,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ffffa80, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-19 05:14:44,085 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-19 05:14:44,087 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-07-19 05:14:44,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below
2023-07-19 05:14:44,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x
2023-07-19 05:14:44,109 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x
java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE
    at java.lang.Enum.valueOf(Enum.java:238)
    at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
    at java.lang.Thread.run(Thread.java:750)
2023-07-19 05:14:44,114 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
    at java.lang.Class.getDeclaredMethod(Class.java:2130)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-19 05:14:44,115 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:14:44,161 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/MasterData/data/master/store-tmp 2023-07-19 05:14:44,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:44,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 05:14:44,220 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:14:44,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:14:44,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 05:14:44,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:14:44,220 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
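Editor's note: the 'master:store' descriptor echoed in the entries above (a single 'proc' family with VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536') corresponds to attributes expressible through the standard HBase 2.x descriptor builders. As a hedged illustration only — the master local region builds this descriptor internally, not through client code, and the class below is purely hypothetical — an equivalent descriptor could be sketched like this:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  public static void main(String[] args) {
    // Illustrative only: mirrors the attributes logged for 'master:store' above.
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("master", "store"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
            .setMaxVersions(1)                 // VERSIONS => '1'
            .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
            .setBlocksize(65536)               // BLOCKSIZE => '65536'
            .build())
        .build();
    System.out.println(td);
  }
}
```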
2023-07-19 05:14:44,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 05:14:44,222 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/MasterData/WALs/jenkins-hbase4.apache.org,35853,1689743680958 2023-07-19 05:14:44,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35853%2C1689743680958, suffix=, logDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/MasterData/WALs/jenkins-hbase4.apache.org,35853,1689743680958, archiveDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/MasterData/oldWALs, maxLogs=10 2023-07-19 05:14:44,327 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41221,DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8,DISK] 2023-07-19 05:14:44,327 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41797,DS-d6cb6fa4-39a5-4030-8421-d6a59591b251,DISK] 2023-07-19 05:14:44,327 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35903,DS-3d30db79-cde8-421e-9ff0-253a3108aa03,DISK] 2023-07-19 05:14:44,335 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-19 05:14:44,449 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/MasterData/WALs/jenkins-hbase4.apache.org,35853,1689743680958/jenkins-hbase4.apache.org%2C35853%2C1689743680958.1689743684264 2023-07-19 05:14:44,451 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41797,DS-d6cb6fa4-39a5-4030-8421-d6a59591b251,DISK], DatanodeInfoWithStorage[127.0.0.1:41221,DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8,DISK], DatanodeInfoWithStorage[127.0.0.1:35903,DS-3d30db79-cde8-421e-9ff0-253a3108aa03,DISK]] 2023-07-19 05:14:44,452 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:14:44,453 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:44,457 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:14:44,459 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:14:44,564 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:14:44,572 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-19 05:14:44,609 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-19 05:14:44,624 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-19 05:14:44,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:14:44,633 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:14:44,656 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:14:44,664 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:14:44,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10251053920, jitterRate=-0.04529620707035065}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:44,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 05:14:44,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-19 05:14:44,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-19 05:14:44,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-19 05:14:44,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-19 05:14:44,702 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-19 05:14:44,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 46 msec 2023-07-19 05:14:44,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-19 05:14:44,777 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-19 05:14:44,783 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
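Editor's note: the entry above reports the master:store region opening with ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10251053920, jitterRate=-0.04529620707035065}. Assuming the default hbase.hregion.max.filesize of 10 GiB (10737418240 bytes) — an assumption, since the effective value is not printed here — the desired size is simply the configured maximum scaled by (1 + jitterRate), as this back-of-the-envelope check (hypothetical class, not part of the test) shows:

```java
public class SplitSizeJitterCheck {
  public static void main(String[] args) {
    // Assumes the default hbase.hregion.max.filesize of 10 GiB (10737418240 bytes);
    // the effective value is not printed in this log.
    long maxFileSize = 10_737_418_240L;
    double jitterRate = -0.04529620707035065;  // taken from the log entry above
    long desired = (long) (maxFileSize * (1 + jitterRate));
    System.out.println(desired); // prints 10251053920, the desiredMaxFileSize in the log
  }
}
```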
2023-07-19 05:14:44,794 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-19 05:14:44,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-19 05:14:44,806 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-19 05:14:44,810 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:14:44,812 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-19 05:14:44,812 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-19 05:14:44,827 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-19 05:14:44,833 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 05:14:44,833 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 05:14:44,833 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 05:14:44,833 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:14:44,833 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 05:14:44,835 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,35853,1689743680958, sessionid=0x1017c00e52c0000, setting cluster-up flag (Was=false) 2023-07-19 05:14:44,859 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:14:44,864 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-19 05:14:44,866 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35853,1689743680958 2023-07-19 05:14:44,872 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:14:44,879 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-19 05:14:44,880 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35853,1689743680958 2023-07-19 05:14:44,884 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.hbase-snapshot/.tmp 2023-07-19 05:14:44,944 INFO [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer(951): ClusterId : c2744a8f-f59b-4711-ae0f-8614a50d23f0 2023-07-19 05:14:44,944 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(951): ClusterId : c2744a8f-f59b-4711-ae0f-8614a50d23f0 2023-07-19 05:14:44,945 INFO [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer(951): ClusterId : c2744a8f-f59b-4711-ae0f-8614a50d23f0 2023-07-19 05:14:44,952 DEBUG [RS:1;jenkins-hbase4:41899] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 05:14:44,952 DEBUG [RS:2;jenkins-hbase4:41979] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 05:14:44,952 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 05:14:44,962 DEBUG [RS:1;jenkins-hbase4:41899] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 05:14:44,962 DEBUG [RS:2;jenkins-hbase4:41979] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 05:14:44,962 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 05:14:44,962 DEBUG [RS:1;jenkins-hbase4:41899] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 05:14:44,962 DEBUG [RS:2;jenkins-hbase4:41979] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 05:14:44,962 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 05:14:44,970 DEBUG [RS:1;jenkins-hbase4:41899] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 05:14:44,970 DEBUG [RS:2;jenkins-hbase4:41979] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 05:14:44,972 DEBUG [RS:1;jenkins-hbase4:41899] zookeeper.ReadOnlyZKClient(139): Connect 0x1ad1dd19 to 127.0.0.1:54772 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:14:44,973 DEBUG [RS:2;jenkins-hbase4:41979] zookeeper.ReadOnlyZKClient(139): Connect 0x5a5ab720 to 127.0.0.1:54772 with 
session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:14:44,973 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 05:14:44,974 DEBUG [RS:0;jenkins-hbase4:45681] zookeeper.ReadOnlyZKClient(139): Connect 0x78dbec02 to 127.0.0.1:54772 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:14:44,977 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-19 05:14:44,982 DEBUG [RS:2;jenkins-hbase4:41979] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ba8abc1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:14:44,983 DEBUG [RS:1;jenkins-hbase4:41899] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@f4daeed, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:14:44,983 DEBUG [RS:2;jenkins-hbase4:41979] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@26064b16, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 05:14:44,984 DEBUG [RS:1;jenkins-hbase4:41899] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4689c462, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 05:14:44,984 DEBUG [RS:0;jenkins-hbase4:45681] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1080ef72, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:14:44,984 DEBUG [RS:0;jenkins-hbase4:45681] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@52cd9370, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 05:14:44,991 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-19 05:14:44,995 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 05:14:44,997 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-19 05:14:44,998 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
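Editor's note: the coprocessor entries above show the master loading org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint (plus the test's CPMasterObserver). Outside this test harness, the rsgroup feature on branch-2.x is normally switched on via configuration; the following is a minimal sketch only, assuming the standard configuration keys and the RSGroupBasedLoadBalancer class from the same hbase-rsgroup module (neither key is printed in this log):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Load the rsgroup admin endpoint on the master, as seen loaded in the log above.
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    // Pair it with the group-aware balancer so placement honours group membership.
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
  }
}
```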
2023-07-19 05:14:45,010 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:45681 2023-07-19 05:14:45,014 DEBUG [RS:1;jenkins-hbase4:41899] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:41899 2023-07-19 05:14:45,016 DEBUG [RS:2;jenkins-hbase4:41979] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41979 2023-07-19 05:14:45,018 INFO [RS:2;jenkins-hbase4:41979] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 05:14:45,019 INFO [RS:2;jenkins-hbase4:41979] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 05:14:45,018 INFO [RS:0;jenkins-hbase4:45681] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 05:14:45,019 INFO [RS:0;jenkins-hbase4:45681] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 05:14:45,019 DEBUG [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 05:14:45,020 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 05:14:45,021 INFO [RS:1;jenkins-hbase4:41899] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 05:14:45,021 INFO [RS:1;jenkins-hbase4:41899] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 05:14:45,021 DEBUG [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 05:14:45,023 INFO [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35853,1689743680958 with isa=jenkins-hbase4.apache.org/172.31.14.131:41899, startcode=1689743683228 2023-07-19 05:14:45,023 INFO [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35853,1689743680958 with isa=jenkins-hbase4.apache.org/172.31.14.131:41979, startcode=1689743683435 2023-07-19 05:14:45,023 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35853,1689743680958 with isa=jenkins-hbase4.apache.org/172.31.14.131:45681, startcode=1689743683028 2023-07-19 05:14:45,048 DEBUG [RS:2;jenkins-hbase4:41979] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 05:14:45,048 DEBUG [RS:1;jenkins-hbase4:41899] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 05:14:45,048 DEBUG [RS:0;jenkins-hbase4:45681] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 05:14:45,153 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-19 05:14:45,178 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39653, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 05:14:45,179 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57723, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 
2023-07-19 05:14:45,178 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42113, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 05:14:45,189 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:45,199 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:45,200 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:45,217 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 05:14:45,221 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 
0.0 etc. 2023-07-19 05:14:45,222 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 05:14:45,222 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-19 05:14:45,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 05:14:45,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 05:14:45,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 05:14:45,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 05:14:45,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-19 05:14:45,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 05:14:45,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,227 DEBUG [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer(2830): Master is not running yet 2023-07-19 05:14:45,227 DEBUG [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer(2830): Master is not running yet 2023-07-19 05:14:45,228 WARN [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-19 05:14:45,227 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(2830): Master is not running yet 2023-07-19 05:14:45,228 WARN [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-19 05:14:45,228 WARN [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
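Editor's note: the two StochasticLoadBalancer entries above list the loaded cost functions and report "sum of multiplier of cost functions = 0.0 etc.". The balancer's score is a weighted sum of those per-function costs, with weights drawn from configuration. The key below is an assumption based on the documented hbase.master.balancer.stochastic.* naming pattern, not something printed in this log; the sketch only illustrates how one such multiplier would be tuned:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerWeightSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Assumed key name (hbase.master.balancer.stochastic.* pattern); raises the
    // weight given to RegionCountSkewCostFunction relative to the other cost functions.
    conf.setFloat("hbase.master.balancer.stochastic.regionCountCost", 1000f);
  }
}
```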
2023-07-19 05:14:45,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689743715228 2023-07-19 05:14:45,231 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-19 05:14:45,235 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-19 05:14:45,241 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 05:14:45,242 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-19 05:14:45,244 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-19 05:14:45,244 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-19 05:14:45,245 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 05:14:45,245 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-19 05:14:45,245 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-19 05:14:45,246 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-19 05:14:45,247 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-19 05:14:45,250 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-19 05:14:45,250 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-19 05:14:45,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-19 05:14:45,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-19 05:14:45,255 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689743685255,5,FailOnTimeoutGroup] 2023-07-19 05:14:45,256 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689743685256,5,FailOnTimeoutGroup] 2023-07-19 05:14:45,256 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,256 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-19 05:14:45,258 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,258 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
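Editor's note: one entry above states that reopening regions with a very high storeFileRefCount is disabled and that a threshold > 0 for hbase.regions.recovery.store.file.ref.count would enable it. A minimal sketch of doing exactly that follows; the key is taken verbatim from the log message, while the threshold value and class name are arbitrary and purely illustrative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class StoreFileRefCountSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Key taken verbatim from the log message above; a positive value enables the
    // "reopen regions with very high storeFileRefCount" recovery behaviour.
    conf.setInt("hbase.regions.recovery.store.file.ref.count", 3); // illustrative threshold
  }
}
```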
2023-07-19 05:14:45,329 INFO [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35853,1689743680958 with isa=jenkins-hbase4.apache.org/172.31.14.131:41899, startcode=1689743683228 2023-07-19 05:14:45,330 INFO [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35853,1689743680958 with isa=jenkins-hbase4.apache.org/172.31.14.131:41979, startcode=1689743683435 2023-07-19 05:14:45,330 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 05:14:45,330 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35853,1689743680958 with isa=jenkins-hbase4.apache.org/172.31.14.131:45681, startcode=1689743683028 2023-07-19 05:14:45,331 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 05:14:45,332 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd 2023-07-19 05:14:45,336 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35853] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:45,337 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-19 05:14:45,338 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-19 05:14:45,343 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35853] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:45,343 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 05:14:45,343 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-19 05:14:45,344 DEBUG [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd 2023-07-19 05:14:45,344 DEBUG [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34189 2023-07-19 05:14:45,344 DEBUG [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36473 2023-07-19 05:14:45,344 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35853] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:45,344 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-19 05:14:45,345 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-19 05:14:45,348 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd 2023-07-19 05:14:45,349 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34189 2023-07-19 05:14:45,350 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36473 2023-07-19 05:14:45,350 DEBUG [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd 2023-07-19 05:14:45,350 DEBUG [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34189 2023-07-19 05:14:45,351 DEBUG [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36473 2023-07-19 05:14:45,352 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:14:45,357 DEBUG [RS:2;jenkins-hbase4:41979] zookeeper.ZKUtil(162): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:45,357 DEBUG [RS:0;jenkins-hbase4:45681] zookeeper.ZKUtil(162): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:45,357 DEBUG [RS:1;jenkins-hbase4:41899] zookeeper.ZKUtil(162): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:45,357 WARN [RS:2;jenkins-hbase4:41979] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 05:14:45,361 INFO [RS:2;jenkins-hbase4:41979] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:14:45,362 DEBUG [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:45,357 WARN [RS:1;jenkins-hbase4:41899] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-19 05:14:45,357 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45681,1689743683028] 2023-07-19 05:14:45,362 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41979,1689743683435] 2023-07-19 05:14:45,362 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41899,1689743683228] 2023-07-19 05:14:45,357 WARN [RS:0;jenkins-hbase4:45681] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 05:14:45,362 INFO [RS:1;jenkins-hbase4:41899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:14:45,362 INFO [RS:0;jenkins-hbase4:45681] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:14:45,363 DEBUG [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:45,363 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:45,386 DEBUG [RS:0;jenkins-hbase4:45681] zookeeper.ZKUtil(162): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:45,387 DEBUG [RS:0;jenkins-hbase4:45681] zookeeper.ZKUtil(162): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:45,387 DEBUG [RS:1;jenkins-hbase4:41899] zookeeper.ZKUtil(162): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:45,387 DEBUG [RS:2;jenkins-hbase4:41979] zookeeper.ZKUtil(162): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:45,388 DEBUG [RS:0;jenkins-hbase4:45681] zookeeper.ZKUtil(162): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:45,388 DEBUG [RS:1;jenkins-hbase4:41899] zookeeper.ZKUtil(162): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:45,388 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:45,388 DEBUG [RS:2;jenkins-hbase4:41979] zookeeper.ZKUtil(162): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:45,388 DEBUG [RS:1;jenkins-hbase4:41899] 
zookeeper.ZKUtil(162): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:45,389 DEBUG [RS:2;jenkins-hbase4:41979] zookeeper.ZKUtil(162): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:45,391 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 05:14:45,393 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/info 2023-07-19 05:14:45,394 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 05:14:45,395 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:45,395 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 05:14:45,398 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/rep_barrier 2023-07-19 05:14:45,399 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 05:14:45,400 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 
05:14:45,400 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 05:14:45,402 DEBUG [RS:1;jenkins-hbase4:41899] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 05:14:45,403 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 05:14:45,402 DEBUG [RS:2;jenkins-hbase4:41979] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 05:14:45,414 INFO [RS:1;jenkins-hbase4:41899] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 05:14:45,414 INFO [RS:0;jenkins-hbase4:45681] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 05:14:45,416 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/table 2023-07-19 05:14:45,417 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 05:14:45,418 INFO [RS:2;jenkins-hbase4:41979] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 05:14:45,419 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:45,425 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740 2023-07-19 05:14:45,426 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740 2023-07-19 05:14:45,432 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-19 05:14:45,435 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 05:14:45,440 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:14:45,443 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11058236800, jitterRate=0.029878556728363037}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 05:14:45,443 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 05:14:45,443 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 05:14:45,443 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 05:14:45,444 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 05:14:45,444 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 05:14:45,444 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 05:14:45,447 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 05:14:45,448 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 05:14:45,449 INFO [RS:2;jenkins-hbase4:41979] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 05:14:45,449 INFO [RS:1;jenkins-hbase4:41899] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 05:14:45,449 INFO [RS:0;jenkins-hbase4:45681] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 05:14:45,454 INFO [RS:2;jenkins-hbase4:41979] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 05:14:45,454 INFO [RS:2;jenkins-hbase4:41979] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 05:14:45,455 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 05:14:45,455 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-19 05:14:45,454 INFO [RS:1;jenkins-hbase4:41899] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 05:14:45,454 INFO [RS:0;jenkins-hbase4:45681] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 05:14:45,458 INFO [RS:1;jenkins-hbase4:41899] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,458 INFO [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 05:14:45,458 INFO [RS:0;jenkins-hbase4:45681] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,460 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 05:14:45,460 INFO [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 05:14:45,467 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-19 05:14:45,469 INFO [RS:1;jenkins-hbase4:41899] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,469 INFO [RS:0;jenkins-hbase4:45681] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,469 INFO [RS:2;jenkins-hbase4:41979] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-19 05:14:45,469 DEBUG [RS:1;jenkins-hbase4:41899] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,469 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,470 DEBUG [RS:1;jenkins-hbase4:41899] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,470 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,470 DEBUG [RS:1;jenkins-hbase4:41899] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,470 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,470 DEBUG [RS:1;jenkins-hbase4:41899] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,470 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,470 DEBUG [RS:1;jenkins-hbase4:41899] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,470 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,470 DEBUG [RS:2;jenkins-hbase4:41979] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,471 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 05:14:45,471 DEBUG [RS:2;jenkins-hbase4:41979] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,471 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,471 DEBUG [RS:1;jenkins-hbase4:41899] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 05:14:45,471 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,471 DEBUG [RS:2;jenkins-hbase4:41979] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,471 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, 
maxPoolSize=1 2023-07-19 05:14:45,471 DEBUG [RS:1;jenkins-hbase4:41899] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,471 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,471 DEBUG [RS:1;jenkins-hbase4:41899] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,471 DEBUG [RS:2;jenkins-hbase4:41979] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,471 DEBUG [RS:1;jenkins-hbase4:41899] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,472 DEBUG [RS:2;jenkins-hbase4:41979] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,472 DEBUG [RS:1;jenkins-hbase4:41899] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,472 DEBUG [RS:2;jenkins-hbase4:41979] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 05:14:45,472 DEBUG [RS:2;jenkins-hbase4:41979] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,472 DEBUG [RS:2;jenkins-hbase4:41979] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,472 DEBUG [RS:2;jenkins-hbase4:41979] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,472 DEBUG [RS:2;jenkins-hbase4:41979] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:45,474 INFO [RS:0;jenkins-hbase4:45681] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,475 INFO [RS:1;jenkins-hbase4:41899] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,475 INFO [RS:0;jenkins-hbase4:45681] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,475 INFO [RS:1;jenkins-hbase4:41899] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,475 INFO [RS:0;jenkins-hbase4:45681] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,475 INFO [RS:1;jenkins-hbase4:41899] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-19 05:14:45,475 INFO [RS:2;jenkins-hbase4:41979] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,475 INFO [RS:2;jenkins-hbase4:41979] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,475 INFO [RS:2;jenkins-hbase4:41979] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,484 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-19 05:14:45,491 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-19 05:14:45,493 INFO [RS:1;jenkins-hbase4:41899] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 05:14:45,493 INFO [RS:2;jenkins-hbase4:41979] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 05:14:45,493 INFO [RS:0;jenkins-hbase4:45681] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 05:14:45,498 INFO [RS:2;jenkins-hbase4:41979] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41979,1689743683435-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,498 INFO [RS:0;jenkins-hbase4:45681] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45681,1689743683028-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:45,498 INFO [RS:1;jenkins-hbase4:41899] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41899,1689743683228-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 05:14:45,518 INFO [RS:0;jenkins-hbase4:45681] regionserver.Replication(203): jenkins-hbase4.apache.org,45681,1689743683028 started 2023-07-19 05:14:45,518 INFO [RS:2;jenkins-hbase4:41979] regionserver.Replication(203): jenkins-hbase4.apache.org,41979,1689743683435 started 2023-07-19 05:14:45,519 INFO [RS:1;jenkins-hbase4:41899] regionserver.Replication(203): jenkins-hbase4.apache.org,41899,1689743683228 started 2023-07-19 05:14:45,519 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45681,1689743683028, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45681, sessionid=0x1017c00e52c0001 2023-07-19 05:14:45,519 INFO [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41899,1689743683228, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41899, sessionid=0x1017c00e52c0002 2023-07-19 05:14:45,519 INFO [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41979,1689743683435, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41979, sessionid=0x1017c00e52c0003 2023-07-19 05:14:45,519 DEBUG [RS:1;jenkins-hbase4:41899] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 05:14:45,519 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 05:14:45,519 DEBUG [RS:2;jenkins-hbase4:41979] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 05:14:45,519 DEBUG [RS:0;jenkins-hbase4:45681] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:45,519 DEBUG [RS:1;jenkins-hbase4:41899] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:45,520 DEBUG [RS:0;jenkins-hbase4:45681] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45681,1689743683028' 2023-07-19 05:14:45,520 DEBUG [RS:2;jenkins-hbase4:41979] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:45,520 DEBUG [RS:0;jenkins-hbase4:45681] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 05:14:45,520 DEBUG [RS:1;jenkins-hbase4:41899] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41899,1689743683228' 2023-07-19 05:14:45,521 DEBUG [RS:1;jenkins-hbase4:41899] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 05:14:45,521 DEBUG [RS:2;jenkins-hbase4:41979] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41979,1689743683435' 2023-07-19 05:14:45,521 DEBUG [RS:2;jenkins-hbase4:41979] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 05:14:45,521 DEBUG [RS:0;jenkins-hbase4:45681] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 05:14:45,521 DEBUG [RS:1;jenkins-hbase4:41899] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 05:14:45,521 DEBUG 
[RS:2;jenkins-hbase4:41979] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 05:14:45,522 DEBUG [RS:1;jenkins-hbase4:41899] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 05:14:45,522 DEBUG [RS:2;jenkins-hbase4:41979] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 05:14:45,522 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 05:14:45,522 DEBUG [RS:2;jenkins-hbase4:41979] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 05:14:45,522 DEBUG [RS:1;jenkins-hbase4:41899] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 05:14:45,522 DEBUG [RS:2;jenkins-hbase4:41979] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:45,522 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 05:14:45,523 DEBUG [RS:2;jenkins-hbase4:41979] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41979,1689743683435' 2023-07-19 05:14:45,522 DEBUG [RS:1;jenkins-hbase4:41899] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:45,523 DEBUG [RS:2;jenkins-hbase4:41979] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 05:14:45,523 DEBUG [RS:0;jenkins-hbase4:45681] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:45,523 DEBUG [RS:1;jenkins-hbase4:41899] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41899,1689743683228' 2023-07-19 05:14:45,523 DEBUG [RS:1;jenkins-hbase4:41899] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 05:14:45,523 DEBUG [RS:0;jenkins-hbase4:45681] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45681,1689743683028' 2023-07-19 05:14:45,523 DEBUG [RS:0;jenkins-hbase4:45681] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 05:14:45,524 DEBUG [RS:2;jenkins-hbase4:41979] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 05:14:45,524 DEBUG [RS:1;jenkins-hbase4:41899] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 05:14:45,524 DEBUG [RS:2;jenkins-hbase4:41979] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 05:14:45,524 DEBUG [RS:0;jenkins-hbase4:45681] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 05:14:45,524 INFO [RS:2;jenkins-hbase4:41979] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 05:14:45,524 DEBUG [RS:1;jenkins-hbase4:41899] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 05:14:45,524 INFO [RS:2;jenkins-hbase4:41979] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting 
space quota manager. 2023-07-19 05:14:45,524 INFO [RS:1;jenkins-hbase4:41899] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 05:14:45,525 INFO [RS:1;jenkins-hbase4:41899] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-19 05:14:45,527 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 05:14:45,527 INFO [RS:0;jenkins-hbase4:45681] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 05:14:45,527 INFO [RS:0;jenkins-hbase4:45681] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-19 05:14:45,637 INFO [RS:2;jenkins-hbase4:41979] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41979%2C1689743683435, suffix=, logDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,41979,1689743683435, archiveDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/oldWALs, maxLogs=32 2023-07-19 05:14:45,637 INFO [RS:0;jenkins-hbase4:45681] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45681%2C1689743683028, suffix=, logDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,45681,1689743683028, archiveDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/oldWALs, maxLogs=32 2023-07-19 05:14:45,637 INFO [RS:1;jenkins-hbase4:41899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41899%2C1689743683228, suffix=, logDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,41899,1689743683228, archiveDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/oldWALs, maxLogs=32 2023-07-19 05:14:45,643 DEBUG [jenkins-hbase4:35853] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-19 05:14:45,664 DEBUG [jenkins-hbase4:35853] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:14:45,665 DEBUG [jenkins-hbase4:35853] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:14:45,665 DEBUG [jenkins-hbase4:35853] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:14:45,665 DEBUG [jenkins-hbase4:35853] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:14:45,666 DEBUG [jenkins-hbase4:35853] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:14:45,675 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41979,1689743683435, state=OPENING 2023-07-19 05:14:45,679 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35903,DS-3d30db79-cde8-421e-9ff0-253a3108aa03,DISK] 2023-07-19 05:14:45,679 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for 
addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41797,DS-d6cb6fa4-39a5-4030-8421-d6a59591b251,DISK] 2023-07-19 05:14:45,680 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41221,DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8,DISK] 2023-07-19 05:14:45,681 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41797,DS-d6cb6fa4-39a5-4030-8421-d6a59591b251,DISK] 2023-07-19 05:14:45,681 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35903,DS-3d30db79-cde8-421e-9ff0-253a3108aa03,DISK] 2023-07-19 05:14:45,683 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41221,DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8,DISK] 2023-07-19 05:14:45,683 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41221,DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8,DISK] 2023-07-19 05:14:45,683 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35903,DS-3d30db79-cde8-421e-9ff0-253a3108aa03,DISK] 2023-07-19 05:14:45,684 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41797,DS-d6cb6fa4-39a5-4030-8421-d6a59591b251,DISK] 2023-07-19 05:14:45,685 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-19 05:14:45,690 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:14:45,692 INFO [RS:2;jenkins-hbase4:41979] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,41979,1689743683435/jenkins-hbase4.apache.org%2C41979%2C1689743683435.1689743685642 2023-07-19 05:14:45,693 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 05:14:45,693 DEBUG [RS:2;jenkins-hbase4:41979] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35903,DS-3d30db79-cde8-421e-9ff0-253a3108aa03,DISK], DatanodeInfoWithStorage[127.0.0.1:41221,DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8,DISK], DatanodeInfoWithStorage[127.0.0.1:41797,DS-d6cb6fa4-39a5-4030-8421-d6a59591b251,DISK]] 2023-07-19 05:14:45,699 INFO [RS:0;jenkins-hbase4:45681] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,45681,1689743683028/jenkins-hbase4.apache.org%2C45681%2C1689743683028.1689743685642 2023-07-19 05:14:45,699 INFO [RS:1;jenkins-hbase4:41899] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,41899,1689743683228/jenkins-hbase4.apache.org%2C41899%2C1689743683228.1689743685642 2023-07-19 05:14:45,699 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41979,1689743683435}] 2023-07-19 05:14:45,701 DEBUG [RS:0;jenkins-hbase4:45681] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35903,DS-3d30db79-cde8-421e-9ff0-253a3108aa03,DISK], DatanodeInfoWithStorage[127.0.0.1:41221,DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8,DISK], DatanodeInfoWithStorage[127.0.0.1:41797,DS-d6cb6fa4-39a5-4030-8421-d6a59591b251,DISK]] 2023-07-19 05:14:45,702 DEBUG [RS:1;jenkins-hbase4:41899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41797,DS-d6cb6fa4-39a5-4030-8421-d6a59591b251,DISK], DatanodeInfoWithStorage[127.0.0.1:35903,DS-3d30db79-cde8-421e-9ff0-253a3108aa03,DISK], DatanodeInfoWithStorage[127.0.0.1:41221,DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8,DISK]] 2023-07-19 05:14:45,810 WARN [ReadOnlyZKClient-127.0.0.1:54772@0x10ac7693] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-19 05:14:45,841 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35853,1689743680958] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:14:45,844 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35538, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:14:45,844 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41979] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:35538 deadline: 1689743745844, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:45,885 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:45,888 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 05:14:45,894 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35542, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 05:14:45,908 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-19 05:14:45,908 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:14:45,912 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41979%2C1689743683435.meta, suffix=.meta, 
logDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,41979,1689743683435, archiveDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/oldWALs, maxLogs=32 2023-07-19 05:14:45,937 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35903,DS-3d30db79-cde8-421e-9ff0-253a3108aa03,DISK] 2023-07-19 05:14:45,937 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41797,DS-d6cb6fa4-39a5-4030-8421-d6a59591b251,DISK] 2023-07-19 05:14:45,940 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41221,DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8,DISK] 2023-07-19 05:14:45,951 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,41979,1689743683435/jenkins-hbase4.apache.org%2C41979%2C1689743683435.meta.1689743685914.meta 2023-07-19 05:14:45,952 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35903,DS-3d30db79-cde8-421e-9ff0-253a3108aa03,DISK], DatanodeInfoWithStorage[127.0.0.1:41797,DS-d6cb6fa4-39a5-4030-8421-d6a59591b251,DISK], DatanodeInfoWithStorage[127.0.0.1:41221,DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8,DISK]] 2023-07-19 05:14:45,952 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:14:45,954 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 05:14:45,957 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-19 05:14:45,959 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-19 05:14:45,966 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-19 05:14:45,966 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:45,966 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-19 05:14:45,966 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-19 05:14:45,969 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 05:14:45,971 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/info 2023-07-19 05:14:45,971 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/info 2023-07-19 05:14:45,972 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 05:14:45,973 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:45,973 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 05:14:45,974 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/rep_barrier 2023-07-19 05:14:45,975 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/rep_barrier 2023-07-19 05:14:45,975 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 05:14:45,976 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:45,976 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 05:14:45,977 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/table 2023-07-19 05:14:45,977 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/table 2023-07-19 05:14:45,978 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 05:14:45,979 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:45,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740 2023-07-19 05:14:45,984 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740 2023-07-19 05:14:45,988 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-19 05:14:45,991 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 05:14:45,993 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9811070080, jitterRate=-0.08627289533615112}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 05:14:45,993 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 05:14:46,006 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689743685882 2023-07-19 05:14:46,034 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-19 05:14:46,035 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-19 05:14:46,036 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41979,1689743683435, state=OPEN 2023-07-19 05:14:46,040 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 05:14:46,040 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 05:14:46,044 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-19 05:14:46,044 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41979,1689743683435 in 341 msec 2023-07-19 05:14:46,050 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-19 05:14:46,050 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 578 msec 2023-07-19 05:14:46,055 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0480 sec 2023-07-19 05:14:46,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689743686056, completionTime=-1 2023-07-19 05:14:46,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-19 05:14:46,056 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-19 05:14:46,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-19 05:14:46,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689743746107 2023-07-19 05:14:46,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689743806107 2023-07-19 05:14:46,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 50 msec 2023-07-19 05:14:46,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35853,1689743680958-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:46,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35853,1689743680958-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:46,124 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35853,1689743680958-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:46,126 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:35853, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:46,126 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:46,135 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-19 05:14:46,150 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-19 05:14:46,152 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 05:14:46,163 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-19 05:14:46,169 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:14:46,173 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:14:46,190 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:46,192 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa empty. 2023-07-19 05:14:46,193 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:46,193 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-19 05:14:46,233 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-19 05:14:46,236 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => f6f6fceaa7e24dc750aa525625e896fa, NAME => 'hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:14:46,257 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:46,257 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing f6f6fceaa7e24dc750aa525625e896fa, disabling compactions & flushes 2023-07-19 05:14:46,257 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 
2023-07-19 05:14:46,258 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:14:46,258 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. after waiting 0 ms 2023-07-19 05:14:46,258 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:14:46,258 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:14:46,258 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for f6f6fceaa7e24dc750aa525625e896fa: 2023-07-19 05:14:46,262 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:14:46,278 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743686265"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743686265"}]},"ts":"1689743686265"} 2023-07-19 05:14:46,307 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 05:14:46,308 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:14:46,313 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743686309"}]},"ts":"1689743686309"} 2023-07-19 05:14:46,317 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-19 05:14:46,321 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:14:46,322 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:14:46,322 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:14:46,322 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:14:46,322 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:14:46,324 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=f6f6fceaa7e24dc750aa525625e896fa, ASSIGN}] 2023-07-19 05:14:46,327 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=f6f6fceaa7e24dc750aa525625e896fa, ASSIGN 2023-07-19 05:14:46,329 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=f6f6fceaa7e24dc750aa525625e896fa, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41899,1689743683228; forceNewPlan=false, retain=false 2023-07-19 05:14:46,364 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35853,1689743680958] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:14:46,367 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35853,1689743680958] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-19 05:14:46,375 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:14:46,377 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:14:46,381 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:46,382 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f empty. 
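The 'hbase:rsgroup' descriptor above additionally carries the MultiRowMutationEndpoint coprocessor and a DisabledRegionSplitPolicy set through table metadata. A hedged sketch of building a descriptor of that shape with the 2.x builder API follows; the table name is hypothetical, and setRegionSplitPolicyClassName is assumed to be the builder equivalent of the SPLIT_POLICY metadata printed in the log.

import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class RsGroupLikeTableSketch {
  // Builds a descriptor shaped like the 'hbase:rsgroup' one logged above:
  // one family 'm' with a single version, a coprocessor endpoint, and splitting disabled.
  static void createRsGroupLikeTable(Admin admin) throws IOException {
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo_rsgroup_like")) // hypothetical name
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
            .setMaxVersions(1) // VERSIONS => '1'
            .build())
        // coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // METADATA 'SPLIT_POLICY' => DisabledRegionSplitPolicy
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .build();
    admin.createTable(td);
  }
}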
2023-07-19 05:14:46,382 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:46,382 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-19 05:14:46,415 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-19 05:14:46,417 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 36230d99e1f0bd83eb4e5988724a475f, NAME => 'hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:14:46,438 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:46,438 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 36230d99e1f0bd83eb4e5988724a475f, disabling compactions & flushes 2023-07-19 05:14:46,439 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 2023-07-19 05:14:46,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 2023-07-19 05:14:46,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. after waiting 0 ms 2023-07-19 05:14:46,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 2023-07-19 05:14:46,439 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 
2023-07-19 05:14:46,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 36230d99e1f0bd83eb4e5988724a475f: 2023-07-19 05:14:46,446 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:14:46,448 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689743686448"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743686448"}]},"ts":"1689743686448"} 2023-07-19 05:14:46,453 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 05:14:46,454 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:14:46,454 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743686454"}]},"ts":"1689743686454"} 2023-07-19 05:14:46,457 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-19 05:14:46,463 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:14:46,464 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:14:46,464 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:14:46,464 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:14:46,464 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:14:46,464 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=36230d99e1f0bd83eb4e5988724a475f, ASSIGN}] 2023-07-19 05:14:46,468 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=36230d99e1f0bd83eb4e5988724a475f, ASSIGN 2023-07-19 05:14:46,470 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=36230d99e1f0bd83eb4e5988724a475f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41899,1689743683228; forceNewPlan=false, retain=false 2023-07-19 05:14:46,471 INFO [jenkins-hbase4:35853] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
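At this point the table state is ENABLING and the balancer has queued TransitRegionStateProcedure ASSIGN subprocedures for both system regions. From a client's point of view all of that is opaque; the usual pattern is simply to wait for the table to come online, as in this small sketch (the polling loop and timeout are an assumption, not how the test itself waits).

import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class WaitForTableSketch {
  // Polls until the regions created by the ASSIGN subprocedures above are open.
  static void waitUntilAvailable(Admin admin, TableName table, long timeoutMs)
      throws IOException, InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!admin.isTableAvailable(table)) { // true once every region of the table is assigned
      if (System.currentTimeMillis() > deadline) {
        throw new IOException("Timed out waiting for " + table + " to become available");
      }
      Thread.sleep(200);
    }
  }
}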
2023-07-19 05:14:46,473 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=36230d99e1f0bd83eb4e5988724a475f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:46,473 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=f6f6fceaa7e24dc750aa525625e896fa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:46,473 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743686472"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743686472"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743686472"}]},"ts":"1689743686472"} 2023-07-19 05:14:46,473 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689743686472"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743686472"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743686472"}]},"ts":"1689743686472"} 2023-07-19 05:14:46,477 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure f6f6fceaa7e24dc750aa525625e896fa, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:46,482 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 36230d99e1f0bd83eb4e5988724a475f, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:46,636 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:46,637 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 05:14:46,640 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41484, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 05:14:46,647 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 2023-07-19 05:14:46,647 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 36230d99e1f0bd83eb4e5988724a475f, NAME => 'hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:14:46,647 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 05:14:46,647 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. service=MultiRowMutationService 2023-07-19 05:14:46,648 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
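The RS_OPEN_PRIORITY_REGION lines above show RegionCoprocessorHost loading MultiRowMutationEndpoint from the table descriptor while the hbase:rsgroup region opens. As an illustration only (this class is not part of HBase or of this test), a minimal table-level RegionObserver that the same loading path would pick up looks like this:

import java.io.IOException;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.wal.WALEdit;

// Attach with TableDescriptorBuilder.setCoprocessor("...CountingObserver");
// RegionCoprocessorHost then loads it at region open time, just like
// MultiRowMutationEndpoint in the log above.
public class CountingObserver implements RegionCoprocessor, RegionObserver {
  private final AtomicLong puts = new AtomicLong();

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this); // expose the observer hooks below
  }

  @Override
  public void prePut(ObserverContext<RegionCoprocessorEnvironment> ctx, Put put,
      WALEdit edit, Durability durability) throws IOException {
    puts.incrementAndGet(); // count writes reaching this region; no behavioral change
  }
}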
2023-07-19 05:14:46,649 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:46,649 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:46,649 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:46,649 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:46,652 INFO [StoreOpener-36230d99e1f0bd83eb4e5988724a475f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:46,654 DEBUG [StoreOpener-36230d99e1f0bd83eb4e5988724a475f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/m 2023-07-19 05:14:46,654 DEBUG [StoreOpener-36230d99e1f0bd83eb4e5988724a475f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/m 2023-07-19 05:14:46,654 INFO [StoreOpener-36230d99e1f0bd83eb4e5988724a475f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 36230d99e1f0bd83eb4e5988724a475f columnFamilyName m 2023-07-19 05:14:46,655 INFO [StoreOpener-36230d99e1f0bd83eb4e5988724a475f-1] regionserver.HStore(310): Store=36230d99e1f0bd83eb4e5988724a475f/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:46,656 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:46,657 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:46,661 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:46,664 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:14:46,665 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 36230d99e1f0bd83eb4e5988724a475f; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4b4f56e, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:46,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 36230d99e1f0bd83eb4e5988724a475f: 2023-07-19 05:14:46,667 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f., pid=9, masterSystemTime=1689743686636 2023-07-19 05:14:46,676 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=36230d99e1f0bd83eb4e5988724a475f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:46,676 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689743686675"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743686675"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743686675"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743686675"}]},"ts":"1689743686675"} 2023-07-19 05:14:46,679 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 2023-07-19 05:14:46,683 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 2023-07-19 05:14:46,683 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 
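Once "Post open deploy tasks" finishes, the master records the region as OPEN with its openSeqNum in hbase:meta. The resulting placement can be read back from the client side through a RegionLocator; a minimal sketch, with the table name supplied by the caller:

import java.io.IOException;

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionPlacementSketch {
  // Prints region -> server placement, i.e. the assignments the log above just committed to meta.
  static void printPlacement(Connection conn, TableName table) throws IOException {
    try (RegionLocator locator = conn.getRegionLocator(table)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}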
2023-07-19 05:14:46,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f6f6fceaa7e24dc750aa525625e896fa, NAME => 'hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:14:46,684 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:46,684 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:46,684 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:46,684 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:46,687 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-19 05:14:46,691 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-19 05:14:46,693 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 36230d99e1f0bd83eb4e5988724a475f, server=jenkins-hbase4.apache.org,41899,1689743683228 in 197 msec 2023-07-19 05:14:46,693 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=36230d99e1f0bd83eb4e5988724a475f, ASSIGN in 223 msec 2023-07-19 05:14:46,693 INFO [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:46,693 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:14:46,694 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743686693"}]},"ts":"1689743686693"} 2023-07-19 05:14:46,696 DEBUG [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/info 2023-07-19 05:14:46,696 DEBUG [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/info 2023-07-19 05:14:46,697 INFO [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f6f6fceaa7e24dc750aa525625e896fa columnFamilyName info 2023-07-19 05:14:46,697 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-19 05:14:46,698 INFO [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] regionserver.HStore(310): Store=f6f6fceaa7e24dc750aa525625e896fa/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:46,699 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:46,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:46,701 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:14:46,705 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:46,705 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 337 msec 2023-07-19 05:14:46,712 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:14:46,713 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f6f6fceaa7e24dc750aa525625e896fa; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10031022240, jitterRate=-0.06578825414180756}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:46,713 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f6f6fceaa7e24dc750aa525625e896fa: 2023-07-19 05:14:46,715 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa., pid=8, masterSystemTime=1689743686636 2023-07-19 05:14:46,719 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 
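The Put entries logged by RegionStateStore and MetaTableAccessor write the info:regioninfo, info:sn/info:server and info:state columns of each region's row in hbase:meta. A sketch of reading those columns back with a plain client scan; the prefix-based stop condition is a simplification and not how HBase itself reads meta.

import java.io.IOException;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaScanSketch {
  // Dumps the 'info' family of hbase:meta rows starting with the given table name,
  // i.e. the columns (regioninfo, server, state, ...) written by the Puts in the log above.
  static void dumpMetaRows(Connection conn, String tableNamePrefix) throws IOException {
    try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(
             new Scan().withStartRow(Bytes.toBytes(tableNamePrefix))
                       .addFamily(Bytes.toBytes("info")))) {
      for (Result row : scanner) {
        if (!Bytes.toString(row.getRow()).startsWith(tableNamePrefix)) {
          break; // past the rows for this table
        }
        for (Cell cell : row.rawCells()) {
          System.out.println(Bytes.toString(row.getRow()) + " "
              + Bytes.toString(CellUtil.cloneQualifier(cell)));
        }
      }
    }
  }
}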
2023-07-19 05:14:46,719 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:14:46,719 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=f6f6fceaa7e24dc750aa525625e896fa, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:46,720 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743686719"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743686719"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743686719"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743686719"}]},"ts":"1689743686719"} 2023-07-19 05:14:46,728 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-19 05:14:46,728 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure f6f6fceaa7e24dc750aa525625e896fa, server=jenkins-hbase4.apache.org,41899,1689743683228 in 246 msec 2023-07-19 05:14:46,732 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-19 05:14:46,732 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=f6f6fceaa7e24dc750aa525625e896fa, ASSIGN in 404 msec 2023-07-19 05:14:46,733 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:14:46,736 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743686736"}]},"ts":"1689743686736"} 2023-07-19 05:14:46,740 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-19 05:14:46,743 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:14:46,746 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 589 msec 2023-07-19 05:14:46,770 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-19 05:14:46,773 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-19 05:14:46,773 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:14:46,789 DEBUG 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35853,1689743680958] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:14:46,793 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41500, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:14:46,799 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-19 05:14:46,799 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-19 05:14:46,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-19 05:14:46,834 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 05:14:46,843 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 37 msec 2023-07-19 05:14:46,849 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-19 05:14:46,863 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 05:14:46,872 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 20 msec 2023-07-19 05:14:46,882 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:14:46,883 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:46,886 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 05:14:46,887 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-19 05:14:46,892 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-19 05:14:46,892 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, 
quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-19 05:14:46,893 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.211sec 2023-07-19 05:14:46,895 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-19 05:14:46,896 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-19 05:14:46,897 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-19 05:14:46,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35853,1689743680958-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-19 05:14:46,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35853,1689743680958-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-19 05:14:46,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-19 05:14:46,953 DEBUG [Listener at localhost/38799] zookeeper.ReadOnlyZKClient(139): Connect 0x5c97849b to 127.0.0.1:54772 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:14:46,960 DEBUG [Listener at localhost/38799] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@39b957cc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:14:46,979 DEBUG [hconnection-0x6043b73e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:14:46,994 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35558, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:14:47,005 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,35853,1689743680958 2023-07-19 05:14:47,006 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:14:47,016 DEBUG [Listener at localhost/38799] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-19 05:14:47,020 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46730, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-19 05:14:47,036 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-19 05:14:47,036 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:14:47,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] 
master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-19 05:14:47,042 DEBUG [Listener at localhost/38799] zookeeper.ReadOnlyZKClient(139): Connect 0x1a021b1a to 127.0.0.1:54772 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:14:47,047 DEBUG [Listener at localhost/38799] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1004d3a7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:14:47,048 INFO [Listener at localhost/38799] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:54772 2023-07-19 05:14:47,051 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 05:14:47,051 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1017c00e52c000a connected 2023-07-19 05:14:47,085 INFO [Listener at localhost/38799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=419, OpenFileDescriptor=680, MaxFileDescriptor=60000, SystemLoadAverage=316, ProcessCount=173, AvailableMemoryMB=3723 2023-07-19 05:14:47,087 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-19 05:14:47,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:47,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:47,161 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-19 05:14:47,176 INFO [Listener at localhost/38799] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 05:14:47,176 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:14:47,176 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 05:14:47,176 INFO [Listener at localhost/38799] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 05:14:47,176 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:14:47,177 INFO [Listener at localhost/38799] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 05:14:47,177 INFO [Listener at localhost/38799] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting 
hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 05:14:47,181 INFO [Listener at localhost/38799] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43237 2023-07-19 05:14:47,181 INFO [Listener at localhost/38799] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 05:14:47,182 DEBUG [Listener at localhost/38799] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 05:14:47,183 INFO [Listener at localhost/38799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:14:47,187 INFO [Listener at localhost/38799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:14:47,189 INFO [Listener at localhost/38799] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43237 connecting to ZooKeeper ensemble=127.0.0.1:54772 2023-07-19 05:14:47,193 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:432370x0, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 05:14:47,195 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43237-0x1017c00e52c000b connected 2023-07-19 05:14:47,195 DEBUG [Listener at localhost/38799] zookeeper.ZKUtil(162): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 05:14:47,196 DEBUG [Listener at localhost/38799] zookeeper.ZKUtil(162): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-19 05:14:47,197 DEBUG [Listener at localhost/38799] zookeeper.ZKUtil(164): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 05:14:47,197 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43237 2023-07-19 05:14:47,198 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43237 2023-07-19 05:14:47,198 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43237 2023-07-19 05:14:47,202 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43237 2023-07-19 05:14:47,203 DEBUG [Listener at localhost/38799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43237 2023-07-19 05:14:47,205 INFO [Listener at localhost/38799] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 05:14:47,205 INFO [Listener at localhost/38799] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 05:14:47,205 INFO [Listener at localhost/38799] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 05:14:47,205 INFO 
[Listener at localhost/38799] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 05:14:47,205 INFO [Listener at localhost/38799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 05:14:47,205 INFO [Listener at localhost/38799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 05:14:47,206 INFO [Listener at localhost/38799] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 05:14:47,206 INFO [Listener at localhost/38799] http.HttpServer(1146): Jetty bound to port 35859 2023-07-19 05:14:47,206 INFO [Listener at localhost/38799] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 05:14:47,210 INFO [Listener at localhost/38799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:14:47,210 INFO [Listener at localhost/38799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@14ac5f55{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/hadoop.log.dir/,AVAILABLE} 2023-07-19 05:14:47,211 INFO [Listener at localhost/38799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:14:47,211 INFO [Listener at localhost/38799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@34f7812e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 05:14:47,344 INFO [Listener at localhost/38799] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 05:14:47,345 INFO [Listener at localhost/38799] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 05:14:47,345 INFO [Listener at localhost/38799] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 05:14:47,345 INFO [Listener at localhost/38799] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 05:14:47,347 INFO [Listener at localhost/38799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:14:47,348 INFO [Listener at localhost/38799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@289fa920{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/java.io.tmpdir/jetty-0_0_0_0-35859-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6353944086393901563/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:14:47,351 INFO [Listener at localhost/38799] server.AbstractConnector(333): Started ServerConnector@35efd609{HTTP/1.1, (http/1.1)}{0.0.0.0:35859} 2023-07-19 05:14:47,351 INFO [Listener at localhost/38799] server.Server(415): Started @11779ms 2023-07-19 05:14:47,356 
INFO [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(951): ClusterId : c2744a8f-f59b-4711-ae0f-8614a50d23f0 2023-07-19 05:14:47,357 DEBUG [RS:3;jenkins-hbase4:43237] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 05:14:47,360 DEBUG [RS:3;jenkins-hbase4:43237] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 05:14:47,360 DEBUG [RS:3;jenkins-hbase4:43237] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 05:14:47,362 DEBUG [RS:3;jenkins-hbase4:43237] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 05:14:47,364 DEBUG [RS:3;jenkins-hbase4:43237] zookeeper.ReadOnlyZKClient(139): Connect 0x09433678 to 127.0.0.1:54772 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:14:47,374 DEBUG [RS:3;jenkins-hbase4:43237] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@784d920, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:14:47,374 DEBUG [RS:3;jenkins-hbase4:43237] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3bb5d08, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 05:14:47,385 DEBUG [RS:3;jenkins-hbase4:43237] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:43237 2023-07-19 05:14:47,385 INFO [RS:3;jenkins-hbase4:43237] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 05:14:47,385 INFO [RS:3;jenkins-hbase4:43237] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 05:14:47,385 DEBUG [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 05:14:47,386 INFO [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35853,1689743680958 with isa=jenkins-hbase4.apache.org/172.31.14.131:43237, startcode=1689743687175 2023-07-19 05:14:47,386 DEBUG [RS:3;jenkins-hbase4:43237] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 05:14:47,396 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44823, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 05:14:47,396 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35853] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:47,397 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
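The "list rsgroup" RPC handled by RSGroupAdminEndpoint earlier, and the ServerEventsListenerThread update triggered by the new server registering just above, are the kind of state the rsgroup admin client exposes. A hedged sketch, assuming the branch-2.4 hbase-rsgroup module's RSGroupAdminClient that this test module ships with:

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListRsGroupsSketch {
  // Lists every region server group and its members; with no custom groups defined,
  // newly registered servers (like jenkins-hbase4...,43237 above) land in the 'default' group.
  static void listGroups(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    List<RSGroupInfo> groups = rsGroupAdmin.listRSGroups();
    for (RSGroupInfo group : groups) {
      System.out.println(group.getName() + " -> " + group.getServers());
    }
  }
}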
2023-07-19 05:14:47,397 DEBUG [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd 2023-07-19 05:14:47,398 DEBUG [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34189 2023-07-19 05:14:47,398 DEBUG [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36473 2023-07-19 05:14:47,403 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:14:47,403 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:14:47,403 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:14:47,403 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:14:47,404 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:47,405 DEBUG [RS:3;jenkins-hbase4:43237] zookeeper.ZKUtil(162): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:47,405 WARN [RS:3;jenkins-hbase4:43237] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
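Each region server registers an ephemeral znode under /hbase/rs, and every peer sets watchers on those znodes, which is what the ZKWatcher/ZKUtil lines around here record. A raw ZooKeeper sketch of watching that path; the quorum string and base znode are taken from the log, and the standalone main method is only for illustration.

import java.util.List;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RsZNodeWatchSketch {
  public static void main(String[] args) throws Exception {
    // Quorum and base znode as reported in the log: 127.0.0.1:54772, baseZNode=/hbase
    Watcher watcher = (WatchedEvent event) ->
        System.out.println("ZK event " + event.getType() + " on " + event.getPath());
    ZooKeeper zk = new ZooKeeper("127.0.0.1:54772", 90_000, watcher);
    try {
      // 'true' registers the default watcher, mirroring the NodeChildrenChanged
      // events on /hbase/rs seen when jenkins-hbase4...,43237 registered.
      List<String> servers = zk.getChildren("/hbase/rs", true);
      servers.forEach(System.out::println);
    } finally {
      zk.close();
    }
  }
}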
2023-07-19 05:14:47,405 INFO [RS:3;jenkins-hbase4:43237] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:14:47,405 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 05:14:47,405 DEBUG [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:47,405 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43237,1689743687175] 2023-07-19 05:14:47,406 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:47,406 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:47,406 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:47,419 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:47,419 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35853,1689743680958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-19 05:14:47,419 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:47,419 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:47,420 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:47,420 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:47,421 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:47,422 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:47,423 DEBUG 
[RS:3;jenkins-hbase4:43237] zookeeper.ZKUtil(162): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:47,424 DEBUG [RS:3;jenkins-hbase4:43237] zookeeper.ZKUtil(162): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:47,424 DEBUG [RS:3;jenkins-hbase4:43237] zookeeper.ZKUtil(162): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:47,426 DEBUG [RS:3;jenkins-hbase4:43237] zookeeper.ZKUtil(162): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:47,428 DEBUG [RS:3;jenkins-hbase4:43237] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 05:14:47,428 INFO [RS:3;jenkins-hbase4:43237] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 05:14:47,431 INFO [RS:3;jenkins-hbase4:43237] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 05:14:47,431 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:47,432 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:47,436 INFO [RS:3;jenkins-hbase4:43237] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 05:14:47,436 INFO [RS:3;jenkins-hbase4:43237] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:47,436 INFO [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 05:14:47,438 INFO [RS:3;jenkins-hbase4:43237] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
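RS:3 schedules its periodic work (CompactionThroughputTuner, CompactedHFilesCleaner, CompactionChecker, MemstoreFlusherChore, ...) through ChoreService. A minimal sketch of the same mechanism with a made-up chore; the chore name, period, and the no-op Stoppable are assumptions for brevity.

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    // Minimal Stoppable so the chore has an owner whose lifecycle it can check.
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };

    ChoreService choreService = new ChoreService("demo"); // thread name prefix
    // Runs every 1000 ms, like the CompactionChecker chore logged above.
    ScheduledChore heartbeat = new ScheduledChore("demo-heartbeat", stopper, 1000) {
      @Override protected void chore() {
        System.out.println("chore tick");
      }
    };
    choreService.scheduleChore(heartbeat);

    Thread.sleep(5_000);
    stopper.stop("demo done");
    choreService.shutdown();
  }
}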
2023-07-19 05:14:47,438 DEBUG [RS:3;jenkins-hbase4:43237] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:47,438 DEBUG [RS:3;jenkins-hbase4:43237] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:47,439 DEBUG [RS:3;jenkins-hbase4:43237] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:47,439 DEBUG [RS:3;jenkins-hbase4:43237] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:47,439 DEBUG [RS:3;jenkins-hbase4:43237] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:47,439 DEBUG [RS:3;jenkins-hbase4:43237] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 05:14:47,439 DEBUG [RS:3;jenkins-hbase4:43237] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:47,439 DEBUG [RS:3;jenkins-hbase4:43237] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:47,439 DEBUG [RS:3;jenkins-hbase4:43237] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:47,439 DEBUG [RS:3;jenkins-hbase4:43237] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:14:47,443 INFO [RS:3;jenkins-hbase4:43237] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:47,443 INFO [RS:3;jenkins-hbase4:43237] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:47,443 INFO [RS:3;jenkins-hbase4:43237] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 05:14:47,459 INFO [RS:3;jenkins-hbase4:43237] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 05:14:47,459 INFO [RS:3;jenkins-hbase4:43237] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43237,1689743687175-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 05:14:47,471 INFO [RS:3;jenkins-hbase4:43237] regionserver.Replication(203): jenkins-hbase4.apache.org,43237,1689743687175 started 2023-07-19 05:14:47,471 INFO [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43237,1689743687175, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43237, sessionid=0x1017c00e52c000b 2023-07-19 05:14:47,471 DEBUG [RS:3;jenkins-hbase4:43237] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 05:14:47,471 DEBUG [RS:3;jenkins-hbase4:43237] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:47,471 DEBUG [RS:3;jenkins-hbase4:43237] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43237,1689743687175' 2023-07-19 05:14:47,471 DEBUG [RS:3;jenkins-hbase4:43237] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 05:14:47,472 DEBUG [RS:3;jenkins-hbase4:43237] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 05:14:47,472 DEBUG [RS:3;jenkins-hbase4:43237] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 05:14:47,472 DEBUG [RS:3;jenkins-hbase4:43237] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 05:14:47,472 DEBUG [RS:3;jenkins-hbase4:43237] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:47,472 DEBUG [RS:3;jenkins-hbase4:43237] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43237,1689743687175' 2023-07-19 05:14:47,472 DEBUG [RS:3;jenkins-hbase4:43237] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 05:14:47,473 DEBUG [RS:3;jenkins-hbase4:43237] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 05:14:47,473 DEBUG [RS:3;jenkins-hbase4:43237] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 05:14:47,473 INFO [RS:3;jenkins-hbase4:43237] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 05:14:47,473 INFO [RS:3;jenkins-hbase4:43237] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
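The entries from the RS:3 thread culminate here with the fourth region server (port 43237) fully joined: WAL provider instantiated, ZK watchers set on /hbase/rs, chores started, and the flush-table-proc and online-snapshot procedure members registered. A rough sketch of how a test can produce the same topology with the public HBaseTestingUtility/MiniHBaseCluster API; the class is illustrative, not copied from TestRSGroupsBase:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class ExtraRegionServerSketch {
        public static void main(String[] args) throws Exception {
            HBaseTestingUtility util = new HBaseTestingUtility();
            // Bring up three region servers first; the "Updated with servers: 4"
            // entry above appears once a fourth one joins.
            util.startMiniCluster(StartMiniClusterOption.builder().numRegionServers(3).build());
            MiniHBaseCluster cluster = util.getMiniHBaseCluster();
            // Adding the extra server triggers the RS:3 startup sequence seen above.
            cluster.startRegionServer();
            util.shutdownMiniCluster();
        }
    }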
2023-07-19 05:14:47,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:14:47,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:47,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:47,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:14:47,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:14:47,490 DEBUG [hconnection-0x5b070797-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:14:47,494 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35572, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:14:47,498 DEBUG [hconnection-0x5b070797-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:14:47,502 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41510, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:14:47,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:47,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:47,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:14:47,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:47,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:46730 deadline: 1689744887514, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:14:47,516 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:14:47,518 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:14:47,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:47,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:47,520 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:14:47,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:14:47,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:47,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:14:47,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:47,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:47,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:47,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:47,539 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:47,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:47,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:14:47,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:47,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:47,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979] to rsgroup Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:47,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:47,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:47,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:47,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:47,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(238): Moving server region 36230d99e1f0bd83eb4e5988724a475f, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:47,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:14:47,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:14:47,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:14:47,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:14:47,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:14:47,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=36230d99e1f0bd83eb4e5988724a475f, REOPEN/MOVE 2023-07-19 05:14:47,564 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, 
region=36230d99e1f0bd83eb4e5988724a475f, REOPEN/MOVE 2023-07-19 05:14:47,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(238): Moving server region f6f6fceaa7e24dc750aa525625e896fa, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:47,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:14:47,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:14:47,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:14:47,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:14:47,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:14:47,566 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=36230d99e1f0bd83eb4e5988724a475f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:47,566 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689743687565"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743687565"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743687565"}]},"ts":"1689743687565"} 2023-07-19 05:14:47,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=f6f6fceaa7e24dc750aa525625e896fa, REOPEN/MOVE 2023-07-19 05:14:47,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:47,567 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=f6f6fceaa7e24dc750aa525625e896fa, REOPEN/MOVE 2023-07-19 05:14:47,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:14:47,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:14:47,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:14:47,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:14:47,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:14:47,569 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=f6f6fceaa7e24dc750aa525625e896fa, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:47,569 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743687569"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743687569"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743687569"}]},"ts":"1689743687569"} 2023-07-19 05:14:47,570 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE; CloseRegionProcedure 36230d99e1f0bd83eb4e5988724a475f, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:47,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-19 05:14:47,571 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-19 05:14:47,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 3 region(s) to group default, current retry=0 2023-07-19 05:14:47,573 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41979,1689743683435, state=CLOSING 2023-07-19 05:14:47,576 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 05:14:47,576 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 05:14:47,576 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=14, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41979,1689743683435}] 2023-07-19 05:14:47,577 INFO [RS:3;jenkins-hbase4:43237] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43237%2C1689743687175, suffix=, logDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,43237,1689743687175, archiveDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/oldWALs, maxLogs=32 2023-07-19 05:14:47,578 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=13, state=RUNNABLE; CloseRegionProcedure f6f6fceaa7e24dc750aa525625e896fa, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:47,583 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=17, ppid=13, state=RUNNABLE; CloseRegionProcedure f6f6fceaa7e24dc750aa525625e896fa, server=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:47,593 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=15, ppid=12, state=RUNNABLE; CloseRegionProcedure 36230d99e1f0bd83eb4e5988724a475f, server=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:47,621 DEBUG [RS-EventLoopGroup-7-1] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35903,DS-3d30db79-cde8-421e-9ff0-253a3108aa03,DISK] 2023-07-19 05:14:47,621 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41797,DS-d6cb6fa4-39a5-4030-8421-d6a59591b251,DISK] 2023-07-19 05:14:47,622 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41221,DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8,DISK] 2023-07-19 05:14:47,627 INFO [RS:3;jenkins-hbase4:43237] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,43237,1689743687175/jenkins-hbase4.apache.org%2C43237%2C1689743687175.1689743687578 2023-07-19 05:14:47,628 DEBUG [RS:3;jenkins-hbase4:43237] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35903,DS-3d30db79-cde8-421e-9ff0-253a3108aa03,DISK], DatanodeInfoWithStorage[127.0.0.1:41221,DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8,DISK], DatanodeInfoWithStorage[127.0.0.1:41797,DS-d6cb6fa4-39a5-4030-8421-d6a59591b251,DISK]] 2023-07-19 05:14:47,752 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-19 05:14:47,753 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 05:14:47,753 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 05:14:47,753 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 05:14:47,753 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 05:14:47,753 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 05:14:47,755 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.21 KB heapSize=6.16 KB 2023-07-19 05:14:47,851 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.03 KB at sequenceid=16 (bloomFilter=false), to=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/.tmp/info/e96ac5aae79e482a80164d991b174e16 2023-07-19 05:14:47,948 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=16 (bloomFilter=false), to=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/.tmp/table/394934de6a1e467c9f092ab34cfbbdad 2023-07-19 05:14:47,963 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/.tmp/info/e96ac5aae79e482a80164d991b174e16 as hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/info/e96ac5aae79e482a80164d991b174e16 2023-07-19 05:14:47,974 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/info/e96ac5aae79e482a80164d991b174e16, entries=22, sequenceid=16, filesize=7.3 K 2023-07-19 05:14:47,977 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/.tmp/table/394934de6a1e467c9f092ab34cfbbdad as hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/table/394934de6a1e467c9f092ab34cfbbdad 2023-07-19 05:14:47,989 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/table/394934de6a1e467c9f092ab34cfbbdad, entries=4, sequenceid=16, filesize=4.8 K 2023-07-19 05:14:47,993 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.21 KB/3290, heapSize ~5.88 KB/6024, currentSize=0 B/0 for 1588230740 in 239ms, sequenceid=16, compaction requested=false 2023-07-19 05:14:47,995 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-19 05:14:48,012 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/recovered.edits/19.seqid, newMaxSeqId=19, maxSeqId=1 2023-07-19 05:14:48,013 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 05:14:48,014 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 05:14:48,014 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 05:14:48,014 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,45681,1689743683028 record at close sequenceid=16 2023-07-19 05:14:48,016 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-19 05:14:48,017 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-19 05:14:48,022 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=14 2023-07-19 05:14:48,022 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=14, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41979,1689743683435 in 441 msec 2023-07-19 05:14:48,023 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45681,1689743683028; forceNewPlan=false, retain=false 2023-07-19 05:14:48,173 INFO [jenkins-hbase4:35853] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 05:14:48,174 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45681,1689743683028, state=OPENING 2023-07-19 05:14:48,177 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 05:14:48,177 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 05:14:48,177 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=14, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:14:48,332 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:48,332 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 05:14:48,336 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44152, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 05:14:48,342 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-19 05:14:48,342 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:14:48,345 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45681%2C1689743683028.meta, suffix=.meta, logDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,45681,1689743683028, archiveDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/oldWALs, maxLogs=32 2023-07-19 05:14:48,370 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41221,DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8,DISK] 2023-07-19 05:14:48,370 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35903,DS-3d30db79-cde8-421e-9ff0-253a3108aa03,DISK] 2023-07-19 05:14:48,383 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41797,DS-d6cb6fa4-39a5-4030-8421-d6a59591b251,DISK] 2023-07-19 05:14:48,391 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/WALs/jenkins-hbase4.apache.org,45681,1689743683028/jenkins-hbase4.apache.org%2C45681%2C1689743683028.meta.1689743688347.meta 2023-07-19 05:14:48,395 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41221,DS-3575f2bf-6e6e-4bd9-b172-8a5b3c2898e8,DISK], DatanodeInfoWithStorage[127.0.0.1:35903,DS-3d30db79-cde8-421e-9ff0-253a3108aa03,DISK], DatanodeInfoWithStorage[127.0.0.1:41797,DS-d6cb6fa4-39a5-4030-8421-d6a59591b251,DISK]] 2023-07-19 05:14:48,395 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:14:48,395 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 05:14:48,395 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-19 05:14:48,396 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-19 05:14:48,396 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-19 05:14:48,396 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:48,396 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-19 05:14:48,396 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-19 05:14:48,405 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 05:14:48,407 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/info 2023-07-19 05:14:48,407 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/info 2023-07-19 05:14:48,408 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor 
true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 05:14:48,423 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/info/e96ac5aae79e482a80164d991b174e16 2023-07-19 05:14:48,424 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:48,424 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 05:14:48,425 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/rep_barrier 2023-07-19 05:14:48,425 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/rep_barrier 2023-07-19 05:14:48,426 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 05:14:48,427 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:48,427 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 05:14:48,428 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/table 2023-07-19 05:14:48,429 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/table 2023-07-19 05:14:48,429 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 05:14:48,444 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/table/394934de6a1e467c9f092ab34cfbbdad 2023-07-19 05:14:48,445 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:48,446 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740 2023-07-19 05:14:48,448 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740 2023-07-19 05:14:48,451 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-19 05:14:48,453 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 05:14:48,454 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=20; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9549972000, jitterRate=-0.11058954894542694}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 05:14:48,454 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 05:14:48,459 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=18, masterSystemTime=1689743688332 2023-07-19 05:14:48,464 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-19 05:14:48,465 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-19 05:14:48,465 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45681,1689743683028, state=OPEN 2023-07-19 05:14:48,467 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 05:14:48,468 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 05:14:48,473 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): 
Finished subprocedure pid=18, resume processing ppid=14 2023-07-19 05:14:48,473 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=14, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45681,1689743683028 in 290 msec 2023-07-19 05:14:48,478 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 905 msec 2023-07-19 05:14:48,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-19 05:14:48,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:48,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f6f6fceaa7e24dc750aa525625e896fa, disabling compactions & flushes 2023-07-19 05:14:48,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:14:48,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:14:48,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. after waiting 0 ms 2023-07-19 05:14:48,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:14:48,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f6f6fceaa7e24dc750aa525625e896fa 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-19 05:14:48,657 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/.tmp/info/b0b2cea6d8eb47f8900a07a5b8fd22cf 2023-07-19 05:14:48,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/.tmp/info/b0b2cea6d8eb47f8900a07a5b8fd22cf as hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/info/b0b2cea6d8eb47f8900a07a5b8fd22cf 2023-07-19 05:14:48,674 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/info/b0b2cea6d8eb47f8900a07a5b8fd22cf, entries=2, sequenceid=6, filesize=4.8 K 2023-07-19 05:14:48,676 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for f6f6fceaa7e24dc750aa525625e896fa in 53ms, sequenceid=6, compaction requested=false 2023-07-19 05:14:48,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new 
MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-19 05:14:48,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-19 05:14:48,696 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:14:48,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f6f6fceaa7e24dc750aa525625e896fa: 2023-07-19 05:14:48,696 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f6f6fceaa7e24dc750aa525625e896fa move to jenkins-hbase4.apache.org,43237,1689743687175 record at close sequenceid=6 2023-07-19 05:14:48,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:48,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:48,700 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=f6f6fceaa7e24dc750aa525625e896fa, regionState=CLOSED 2023-07-19 05:14:48,701 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743688700"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743688700"}]},"ts":"1689743688700"} 2023-07-19 05:14:48,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 36230d99e1f0bd83eb4e5988724a475f, disabling compactions & flushes 2023-07-19 05:14:48,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 2023-07-19 05:14:48,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 2023-07-19 05:14:48,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. after waiting 0 ms 2023-07-19 05:14:48,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 2023-07-19 05:14:48,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 36230d99e1f0bd83eb4e5988724a475f 1/1 column families, dataSize=1.38 KB heapSize=2.37 KB 2023-07-19 05:14:48,701 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41979] ipc.CallRunner(144): callId: 41 service: ClientService methodName: Mutate size: 217 connection: 172.31.14.131:35538 deadline: 1689743748701, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45681 startCode=1689743683028. As of locationSeqNum=16. 
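The RegionMovedException at the end of the entry above is the normal signal that hbase:meta has been relocated (here to port 45681, as of locationSeqNum=16); the client responds by refreshing its cached region location and retrying. A minimal sketch, not part of this test, of forcing that refresh explicitly through the public RegionLocator API; the empty row key is just a placeholder for "first region of meta":

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RelocateSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
                // reload=true bypasses the client-side location cache, picking up
                // the new server recorded after the REOPEN/MOVE above.
                HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
                System.out.println("meta is now on " + loc.getServerName());
            }
        }
    }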
2023-07-19 05:14:48,727 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/.tmp/m/e49e6b0051004bdd8dfbbf994f21c581 2023-07-19 05:14:48,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/.tmp/m/e49e6b0051004bdd8dfbbf994f21c581 as hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/m/e49e6b0051004bdd8dfbbf994f21c581 2023-07-19 05:14:48,745 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/m/e49e6b0051004bdd8dfbbf994f21c581, entries=3, sequenceid=9, filesize=5.2 K 2023-07-19 05:14:48,746 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1418, heapSize ~2.35 KB/2408, currentSize=0 B/0 for 36230d99e1f0bd83eb4e5988724a475f in 45ms, sequenceid=9, compaction requested=false 2023-07-19 05:14:48,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-19 05:14:48,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-19 05:14:48,760 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 05:14:48,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 
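The two flush sequences above show each region's memstore (the 'info' family of hbase:namespace, then the 'm' family of hbase:rsgroup) being written out as an HFile before the region closes, so the destination server can open the region without replaying a WAL. A flush of the same kind can also be requested on demand through the Admin API; the following is only a minimal sketch, assuming a standard HBase 2.x client on the classpath, and is not code taken from the test itself.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Ask the region servers hosting hbase:rsgroup to flush its memstores to HFiles,
      // the same kind of flush the close path performs above.
      admin.flush(TableName.valueOf("hbase:rsgroup"));
    }
  }
}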
2023-07-19 05:14:48,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 36230d99e1f0bd83eb4e5988724a475f: 2023-07-19 05:14:48,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 36230d99e1f0bd83eb4e5988724a475f move to jenkins-hbase4.apache.org,45681,1689743683028 record at close sequenceid=9 2023-07-19 05:14:48,764 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:48,765 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=36230d99e1f0bd83eb4e5988724a475f, regionState=CLOSED 2023-07-19 05:14:48,765 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689743688765"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743688765"}]},"ts":"1689743688765"} 2023-07-19 05:14:48,765 DEBUG [PEWorker-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:14:48,769 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44158, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:14:48,774 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12 2023-07-19 05:14:48,774 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; CloseRegionProcedure 36230d99e1f0bd83eb4e5988724a475f, server=jenkins-hbase4.apache.org,41899,1689743683228 in 1.2010 sec 2023-07-19 05:14:48,775 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=36230d99e1f0bd83eb4e5988724a475f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45681,1689743683028; forceNewPlan=false, retain=false 2023-07-19 05:14:48,813 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=13 2023-07-19 05:14:48,813 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=13, state=SUCCESS; CloseRegionProcedure f6f6fceaa7e24dc750aa525625e896fa, server=jenkins-hbase4.apache.org,41899,1689743683228 in 1.2290 sec 2023-07-19 05:14:48,815 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=f6f6fceaa7e24dc750aa525625e896fa, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43237,1689743687175; forceNewPlan=false, retain=false 2023-07-19 05:14:48,815 INFO [jenkins-hbase4:35853] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
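The REOPEN/MOVE transitions progressing above (pid=12 for hbase:rsgroup, pid=13 for hbase:namespace) are driven server-side by the rsgroup server move, but the same close-then-reopen cycle can be requested for a single region from a client via Admin.move. A rough sketch follows, assuming the standard HBase 2.4 client API; the encoded region name and destination server are copied from the log purely for illustration.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class MoveRegionSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Encoded region name of the hbase:namespace region as it appears in the log.
      byte[] encodedRegion = Bytes.toBytes("f6f6fceaa7e24dc750aa525625e896fa");
      // Destination server name: host, port and startcode taken from the log entries.
      ServerName dest = ServerName.valueOf("jenkins-hbase4.apache.org", 43237, 1689743687175L);
      // The master closes the region on its current server and reopens it on dest,
      // which is the TransitRegionStateProcedure REOPEN/MOVE seen in the log.
      admin.move(encodedRegion, dest);
    }
  }
}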
2023-07-19 05:14:48,816 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=36230d99e1f0bd83eb4e5988724a475f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:48,816 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689743688816"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743688816"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743688816"}]},"ts":"1689743688816"} 2023-07-19 05:14:48,819 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=12, state=RUNNABLE; OpenRegionProcedure 36230d99e1f0bd83eb4e5988724a475f, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:14:48,821 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=f6f6fceaa7e24dc750aa525625e896fa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:48,821 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743688821"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743688821"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743688821"}]},"ts":"1689743688821"} 2023-07-19 05:14:48,824 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=13, state=RUNNABLE; OpenRegionProcedure f6f6fceaa7e24dc750aa525625e896fa, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:14:48,977 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:48,977 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 05:14:48,981 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53488, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 05:14:48,981 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 2023-07-19 05:14:48,981 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 36230d99e1f0bd83eb4e5988724a475f, NAME => 'hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:14:48,982 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 05:14:48,982 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. service=MultiRowMutationService 2023-07-19 05:14:48,982 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-19 05:14:48,982 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:48,982 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:48,983 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:48,983 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:48,984 INFO [StoreOpener-36230d99e1f0bd83eb4e5988724a475f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:48,987 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:14:48,987 DEBUG [StoreOpener-36230d99e1f0bd83eb4e5988724a475f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/m 2023-07-19 05:14:48,988 DEBUG [StoreOpener-36230d99e1f0bd83eb4e5988724a475f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/m 2023-07-19 05:14:48,988 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f6f6fceaa7e24dc750aa525625e896fa, NAME => 'hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:14:48,988 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:48,988 INFO [StoreOpener-36230d99e1f0bd83eb4e5988724a475f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 36230d99e1f0bd83eb4e5988724a475f columnFamilyName m 2023-07-19 05:14:48,988 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; 
preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:48,988 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:48,988 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:48,991 INFO [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:48,998 DEBUG [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/info 2023-07-19 05:14:48,998 DEBUG [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/info 2023-07-19 05:14:48,999 INFO [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f6f6fceaa7e24dc750aa525625e896fa columnFamilyName info 2023-07-19 05:14:48,999 DEBUG [StoreOpener-36230d99e1f0bd83eb4e5988724a475f-1] regionserver.HStore(539): loaded hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/m/e49e6b0051004bdd8dfbbf994f21c581 2023-07-19 05:14:48,999 INFO [StoreOpener-36230d99e1f0bd83eb4e5988724a475f-1] regionserver.HStore(310): Store=36230d99e1f0bd83eb4e5988724a475f/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:49,000 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:49,003 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:49,008 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:14:49,009 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 
36230d99e1f0bd83eb4e5988724a475f; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@759d5d64, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:49,009 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 36230d99e1f0bd83eb4e5988724a475f: 2023-07-19 05:14:49,011 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f., pid=19, masterSystemTime=1689743688973 2023-07-19 05:14:49,011 DEBUG [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] regionserver.HStore(539): loaded hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/info/b0b2cea6d8eb47f8900a07a5b8fd22cf 2023-07-19 05:14:49,011 INFO [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] regionserver.HStore(310): Store=f6f6fceaa7e24dc750aa525625e896fa/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:49,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:49,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 2023-07-19 05:14:49,013 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 
2023-07-19 05:14:49,014 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=36230d99e1f0bd83eb4e5988724a475f, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:49,015 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:49,015 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689743689014"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743689014"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743689014"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743689014"}]},"ts":"1689743689014"} 2023-07-19 05:14:49,022 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:49,024 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f6f6fceaa7e24dc750aa525625e896fa; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9838791200, jitterRate=-0.08369116485118866}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:49,024 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f6f6fceaa7e24dc750aa525625e896fa: 2023-07-19 05:14:49,025 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa., pid=20, masterSystemTime=1689743688977 2023-07-19 05:14:49,029 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=12 2023-07-19 05:14:49,029 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=12, state=SUCCESS; OpenRegionProcedure 36230d99e1f0bd83eb4e5988724a475f, server=jenkins-hbase4.apache.org,45681,1689743683028 in 199 msec 2023-07-19 05:14:49,030 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:14:49,031 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 
2023-07-19 05:14:49,031 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=f6f6fceaa7e24dc750aa525625e896fa, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:49,032 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743689031"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743689031"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743689031"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743689031"}]},"ts":"1689743689031"} 2023-07-19 05:14:49,032 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=36230d99e1f0bd83eb4e5988724a475f, REOPEN/MOVE in 1.4670 sec 2023-07-19 05:14:49,037 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=13 2023-07-19 05:14:49,038 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=13, state=SUCCESS; OpenRegionProcedure f6f6fceaa7e24dc750aa525625e896fa, server=jenkins-hbase4.apache.org,43237,1689743687175 in 210 msec 2023-07-19 05:14:49,039 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=f6f6fceaa7e24dc750aa525625e896fa, REOPEN/MOVE in 1.4730 sec 2023-07-19 05:14:49,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41899,1689743683228, jenkins-hbase4.apache.org,41979,1689743683435] are moved back to default 2023-07-19 05:14:49,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:49,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:14:49,574 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41899] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:41510 deadline: 1689743749574, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45681 startCode=1689743683028. As of locationSeqNum=9. 2023-07-19 05:14:49,678 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41979] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:35572 deadline: 1689743749678, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45681 startCode=1689743683028. As of locationSeqNum=16. 
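The RSGroupAdminService.MoveServers and GetRSGroupInfo requests recorded above come from the rsgroup admin client that the test drives. A hedged sketch of the same calls, assuming the RSGroupAdminClient shipped in the hbase-rsgroup module on branch-2.4; the group name and server address are the ones from the log and are shown only as an example.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      String group = "Group_testTableMoveTruncateAndDrop_2021517430";
      rsGroupAdmin.addRSGroup(group);  // create the group first (skip if it already exists)
      // Move one server out of the default group into the new group,
      // i.e. the RSGroupAdminService.MoveServers call seen in the log.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 45681)),
          group);
      // RSGroupAdminService.GetRSGroupInfo: read back the group membership.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
      System.out.println(info.getServers());
    }
  }
}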
2023-07-19 05:14:49,780 DEBUG [hconnection-0x5b070797-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:14:49,782 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44160, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:14:49,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:49,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:49,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:49,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:49,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:14:49,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:49,829 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:14:49,832 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41899] ipc.CallRunner(144): callId: 50 service: ClientService methodName: ExecService size: 622 connection: 172.31.14.131:41500 deadline: 1689743749832, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45681 startCode=1689743683028. As of locationSeqNum=9. 
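The create request logged above spells out the full table descriptor: REGION_REPLICATION => '1' and a single family 'f' with VERSIONS => '1' and BLOOMFILTER => 'NONE', and the five regions created later imply four split keys. A client-side equivalent might look like the sketch below, assuming the standard HBase 2.x Admin API (this is not the test's own code); the two non-printable split keys are spelled out byte-for-byte from their escaped form in the log.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Table descriptor matching the attributes printed by the master.
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
              .setMaxVersions(1)
              .setBloomFilterType(BloomType.NONE)
              .build())
          .build();
      // Four split keys produce the five regions seen in the log; the two
      // binary keys are written out from their escaped log representation.
      byte[][] splitKeys = new byte[][] {
          Bytes.toBytes("aaaaa"),
          new byte[] {'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE},
          new byte[] {'r', 0x1C, (byte) 0xC7, 'r', 0x1B},
          Bytes.toBytes("zzzzz")
      };
      // Stored on the master as a CreateTableProcedure (pid=21 in the log).
      admin.createTable(desc, splitKeys);
    }
  }
}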
2023-07-19 05:14:49,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 21 2023-07-19 05:14:49,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-19 05:14:49,944 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:49,945 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:49,946 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:49,947 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:49,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-19 05:14:49,957 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:14:49,965 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:49,965 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:49,967 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43 empty. 2023-07-19 05:14:49,967 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:49,972 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c empty. 
2023-07-19 05:14:49,972 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:49,972 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:49,975 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:49,977 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328 empty. 2023-07-19 05:14:49,979 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496 empty. 2023-07-19 05:14:49,979 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59 empty. 2023-07-19 05:14:49,980 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:49,981 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:49,981 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:49,981 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:49,982 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-19 05:14:50,040 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-19 05:14:50,041 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 329b1dced0c6dac68200f800600fec43, NAME => 'Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', 
DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:14:50,043 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 246041b22a4c0c68769b13202d5db25c, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:14:50,043 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 6790c8e1d988dd7656ea21c548741328, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:14:50,099 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:50,101 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 329b1dced0c6dac68200f800600fec43, disabling compactions & flushes 2023-07-19 05:14:50,101 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 2023-07-19 05:14:50,101 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 2023-07-19 05:14:50,101 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. after waiting 0 ms 2023-07-19 05:14:50,101 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 
2023-07-19 05:14:50,102 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 2023-07-19 05:14:50,102 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 329b1dced0c6dac68200f800600fec43: 2023-07-19 05:14:50,104 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 724483523107c4b407be49a4be4a8c59, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:14:50,111 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:50,117 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 6790c8e1d988dd7656ea21c548741328, disabling compactions & flushes 2023-07-19 05:14:50,117 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 2023-07-19 05:14:50,117 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 2023-07-19 05:14:50,117 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. after waiting 0 ms 2023-07-19 05:14:50,117 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 2023-07-19 05:14:50,117 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 
2023-07-19 05:14:50,118 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 6790c8e1d988dd7656ea21c548741328: 2023-07-19 05:14:50,118 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 579d97cc3415548daa398aa865f7d496, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:14:50,118 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:50,119 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 246041b22a4c0c68769b13202d5db25c, disabling compactions & flushes 2023-07-19 05:14:50,119 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 2023-07-19 05:14:50,119 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 2023-07-19 05:14:50,119 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. after waiting 0 ms 2023-07-19 05:14:50,120 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 2023-07-19 05:14:50,120 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 
2023-07-19 05:14:50,120 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 246041b22a4c0c68769b13202d5db25c: 2023-07-19 05:14:50,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-19 05:14:50,215 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:50,215 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 724483523107c4b407be49a4be4a8c59, disabling compactions & flushes 2023-07-19 05:14:50,215 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:50,215 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:50,215 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. after waiting 0 ms 2023-07-19 05:14:50,215 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:50,215 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:50,215 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 724483523107c4b407be49a4be4a8c59: 2023-07-19 05:14:50,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-19 05:14:50,618 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:50,618 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 579d97cc3415548daa398aa865f7d496, disabling compactions & flushes 2023-07-19 05:14:50,618 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 2023-07-19 05:14:50,618 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 
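The recurring "Checking to see if procedure is done pid=21" entries are the master answering the client as it polls the create-table procedure for completion. On the client side that corresponds to the asynchronous create; a minimal sketch, assuming the standard Admin API and reusing a descriptor and split keys built as in the earlier sketch (the blocking createTable call performs this wait internally).

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.TableDescriptor;

public class AsyncCreateSketch {
  // 'admin', 'desc' and 'splitKeys' are assumed to be constructed as in the previous sketch.
  static void createAndWait(Admin admin, TableDescriptor desc, byte[][] splitKeys) throws Exception {
    // The master stores the procedure and returns; the Future polls its completion,
    // which shows up server-side as the repeated "is procedure done" checks.
    Future<Void> pending = admin.createTableAsync(desc, splitKeys);
    pending.get(5, TimeUnit.MINUTES);
  }
}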
2023-07-19 05:14:50,618 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. after waiting 0 ms 2023-07-19 05:14:50,618 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 2023-07-19 05:14:50,618 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 2023-07-19 05:14:50,618 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 579d97cc3415548daa398aa865f7d496: 2023-07-19 05:14:50,622 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:14:50,624 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743690623"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743690623"}]},"ts":"1689743690623"} 2023-07-19 05:14:50,624 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743690623"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743690623"}]},"ts":"1689743690623"} 2023-07-19 05:14:50,624 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743690623"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743690623"}]},"ts":"1689743690623"} 2023-07-19 05:14:50,624 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743690623"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743690623"}]},"ts":"1689743690623"} 2023-07-19 05:14:50,625 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743690623"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743690623"}]},"ts":"1689743690623"} 2023-07-19 05:14:50,673 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-19 05:14:50,675 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:14:50,675 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743690675"}]},"ts":"1689743690675"} 2023-07-19 05:14:50,677 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-19 05:14:50,683 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:14:50,683 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:14:50,683 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:14:50,683 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:14:50,684 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=329b1dced0c6dac68200f800600fec43, ASSIGN}, {pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6790c8e1d988dd7656ea21c548741328, ASSIGN}, {pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=246041b22a4c0c68769b13202d5db25c, ASSIGN}, {pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=724483523107c4b407be49a4be4a8c59, ASSIGN}, {pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=579d97cc3415548daa398aa865f7d496, ASSIGN}] 2023-07-19 05:14:50,686 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6790c8e1d988dd7656ea21c548741328, ASSIGN 2023-07-19 05:14:50,687 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=246041b22a4c0c68769b13202d5db25c, ASSIGN 2023-07-19 05:14:50,687 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=329b1dced0c6dac68200f800600fec43, ASSIGN 2023-07-19 05:14:50,687 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=724483523107c4b407be49a4be4a8c59, ASSIGN 2023-07-19 05:14:50,689 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=21, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6790c8e1d988dd7656ea21c548741328, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45681,1689743683028; forceNewPlan=false, retain=false 2023-07-19 05:14:50,690 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=246041b22a4c0c68769b13202d5db25c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43237,1689743687175; forceNewPlan=false, retain=false 2023-07-19 05:14:50,690 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=724483523107c4b407be49a4be4a8c59, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45681,1689743683028; forceNewPlan=false, retain=false 2023-07-19 05:14:50,690 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=329b1dced0c6dac68200f800600fec43, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43237,1689743687175; forceNewPlan=false, retain=false 2023-07-19 05:14:50,691 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=579d97cc3415548daa398aa865f7d496, ASSIGN 2023-07-19 05:14:50,692 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=579d97cc3415548daa398aa865f7d496, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43237,1689743687175; forceNewPlan=false, retain=false 2023-07-19 05:14:50,839 INFO [jenkins-hbase4:35853] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-19 05:14:50,845 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=724483523107c4b407be49a4be4a8c59, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:50,845 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743690845"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743690845"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743690845"}]},"ts":"1689743690845"} 2023-07-19 05:14:50,846 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=6790c8e1d988dd7656ea21c548741328, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:50,846 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743690845"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743690845"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743690845"}]},"ts":"1689743690845"} 2023-07-19 05:14:50,846 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=579d97cc3415548daa398aa865f7d496, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:50,846 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=246041b22a4c0c68769b13202d5db25c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:50,846 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=329b1dced0c6dac68200f800600fec43, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:50,847 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743690846"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743690846"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743690846"}]},"ts":"1689743690846"} 2023-07-19 05:14:50,847 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743690846"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743690846"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743690846"}]},"ts":"1689743690846"} 2023-07-19 05:14:50,847 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743690846"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743690846"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743690846"}]},"ts":"1689743690846"} 2023-07-19 05:14:50,849 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=25, state=RUNNABLE; OpenRegionProcedure 
724483523107c4b407be49a4be4a8c59, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:14:50,854 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=23, state=RUNNABLE; OpenRegionProcedure 6790c8e1d988dd7656ea21c548741328, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:14:50,856 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=22, state=RUNNABLE; OpenRegionProcedure 329b1dced0c6dac68200f800600fec43, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:14:50,859 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=24, state=RUNNABLE; OpenRegionProcedure 246041b22a4c0c68769b13202d5db25c, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:14:50,861 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=26, state=RUNNABLE; OpenRegionProcedure 579d97cc3415548daa398aa865f7d496, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:14:50,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-19 05:14:51,077 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 2023-07-19 05:14:51,077 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:51,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 579d97cc3415548daa398aa865f7d496, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-19 05:14:51,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 724483523107c4b407be49a4be4a8c59, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-19 05:14:51,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:51,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:51,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:51,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:51,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(7894): checking encryption for 579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:51,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:51,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:51,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:51,081 INFO [StoreOpener-579d97cc3415548daa398aa865f7d496-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:51,083 INFO [StoreOpener-724483523107c4b407be49a4be4a8c59-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:51,085 DEBUG [StoreOpener-579d97cc3415548daa398aa865f7d496-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496/f 2023-07-19 05:14:51,085 DEBUG [StoreOpener-579d97cc3415548daa398aa865f7d496-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496/f 2023-07-19 05:14:51,087 INFO [StoreOpener-579d97cc3415548daa398aa865f7d496-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 579d97cc3415548daa398aa865f7d496 columnFamilyName f 2023-07-19 05:14:51,087 DEBUG [StoreOpener-724483523107c4b407be49a4be4a8c59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59/f 2023-07-19 05:14:51,087 DEBUG [StoreOpener-724483523107c4b407be49a4be4a8c59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59/f 2023-07-19 05:14:51,087 INFO [StoreOpener-579d97cc3415548daa398aa865f7d496-1] regionserver.HStore(310): Store=579d97cc3415548daa398aa865f7d496/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-19 05:14:51,089 INFO [StoreOpener-724483523107c4b407be49a4be4a8c59-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 724483523107c4b407be49a4be4a8c59 columnFamilyName f 2023-07-19 05:14:51,089 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:51,090 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:51,090 INFO [StoreOpener-724483523107c4b407be49a4be4a8c59-1] regionserver.HStore(310): Store=724483523107c4b407be49a4be4a8c59/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:51,092 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:51,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:51,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:51,098 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:51,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:14:51,103 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 579d97cc3415548daa398aa865f7d496; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10017991840, jitterRate=-0.06700180470943451}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:51,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 579d97cc3415548daa398aa865f7d496: 
2023-07-19 05:14:51,105 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496., pid=31, masterSystemTime=1689743691052 2023-07-19 05:14:51,108 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 2023-07-19 05:14:51,108 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 2023-07-19 05:14:51,108 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 2023-07-19 05:14:51,108 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 246041b22a4c0c68769b13202d5db25c, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-19 05:14:51,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:51,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:51,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:51,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:51,110 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=579d97cc3415548daa398aa865f7d496, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:51,110 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743691110"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743691110"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743691110"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743691110"}]},"ts":"1689743691110"} 2023-07-19 05:14:51,111 INFO [StoreOpener-246041b22a4c0c68769b13202d5db25c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:51,118 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=26 2023-07-19 05:14:51,119 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=26, 
state=SUCCESS; OpenRegionProcedure 579d97cc3415548daa398aa865f7d496, server=jenkins-hbase4.apache.org,43237,1689743687175 in 252 msec 2023-07-19 05:14:51,120 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:14:51,122 DEBUG [StoreOpener-246041b22a4c0c68769b13202d5db25c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c/f 2023-07-19 05:14:51,122 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 724483523107c4b407be49a4be4a8c59; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11493092800, jitterRate=0.07037767767906189}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:51,122 DEBUG [StoreOpener-246041b22a4c0c68769b13202d5db25c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c/f 2023-07-19 05:14:51,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 724483523107c4b407be49a4be4a8c59: 2023-07-19 05:14:51,122 INFO [StoreOpener-246041b22a4c0c68769b13202d5db25c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 246041b22a4c0c68769b13202d5db25c columnFamilyName f 2023-07-19 05:14:51,122 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=579d97cc3415548daa398aa865f7d496, ASSIGN in 434 msec 2023-07-19 05:14:51,123 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59., pid=27, masterSystemTime=1689743691049 2023-07-19 05:14:51,124 INFO [StoreOpener-246041b22a4c0c68769b13202d5db25c-1] regionserver.HStore(310): Store=246041b22a4c0c68769b13202d5db25c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:51,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c 2023-07-19 
05:14:51,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:51,127 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:51,127 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:51,127 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 2023-07-19 05:14:51,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6790c8e1d988dd7656ea21c548741328, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-19 05:14:51,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:51,128 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=724483523107c4b407be49a4be4a8c59, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:51,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:51,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:51,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:51,128 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743691128"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743691128"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743691128"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743691128"}]},"ts":"1689743691128"} 2023-07-19 05:14:51,130 INFO [StoreOpener-6790c8e1d988dd7656ea21c548741328-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:51,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:51,133 DEBUG 
[StoreOpener-6790c8e1d988dd7656ea21c548741328-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328/f 2023-07-19 05:14:51,133 DEBUG [StoreOpener-6790c8e1d988dd7656ea21c548741328-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328/f 2023-07-19 05:14:51,134 INFO [StoreOpener-6790c8e1d988dd7656ea21c548741328-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6790c8e1d988dd7656ea21c548741328 columnFamilyName f 2023-07-19 05:14:51,135 INFO [StoreOpener-6790c8e1d988dd7656ea21c548741328-1] regionserver.HStore(310): Store=6790c8e1d988dd7656ea21c548741328/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:51,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:14:51,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:51,142 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 246041b22a4c0c68769b13202d5db25c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10837801440, jitterRate=0.00934891402721405}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:51,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 246041b22a4c0c68769b13202d5db25c: 2023-07-19 05:14:51,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:51,144 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c., pid=30, masterSystemTime=1689743691052 2023-07-19 05:14:51,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id 
for 6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:51,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 2023-07-19 05:14:51,150 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 2023-07-19 05:14:51,150 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 2023-07-19 05:14:51,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 329b1dced0c6dac68200f800600fec43, NAME => 'Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-19 05:14:51,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:51,151 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=25 2023-07-19 05:14:51,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:51,151 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=25, state=SUCCESS; OpenRegionProcedure 724483523107c4b407be49a4be4a8c59, server=jenkins-hbase4.apache.org,45681,1689743683028 in 283 msec 2023-07-19 05:14:51,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:51,151 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=246041b22a4c0c68769b13202d5db25c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:51,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:51,152 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743691151"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743691151"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743691151"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743691151"}]},"ts":"1689743691151"} 2023-07-19 05:14:51,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:14:51,154 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=21, state=SUCCESS; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=724483523107c4b407be49a4be4a8c59, ASSIGN in 467 msec 2023-07-19 05:14:51,160 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6790c8e1d988dd7656ea21c548741328; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10536158560, jitterRate=-0.018743768334388733}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:51,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6790c8e1d988dd7656ea21c548741328: 2023-07-19 05:14:51,161 INFO [StoreOpener-329b1dced0c6dac68200f800600fec43-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:51,161 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328., pid=28, masterSystemTime=1689743691049 2023-07-19 05:14:51,163 DEBUG [StoreOpener-329b1dced0c6dac68200f800600fec43-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43/f 2023-07-19 05:14:51,163 DEBUG [StoreOpener-329b1dced0c6dac68200f800600fec43-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43/f 2023-07-19 05:14:51,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 2023-07-19 05:14:51,164 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 
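The RegionStateStore Puts interleaved above are the master persisting each region's transition into the info family of hbase:meta (qualifiers regioninfo, sn, server, serverstartcode, seqnumDuringOpen, state). A minimal client-side sketch of reading those same columns back for one region row, using only the stock HBase 2.x client API; the row key is copied from this run's output, and the rest is an illustrative assumption rather than anything the test itself does:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaRegionStatePeek {
  public static void main(String[] args) throws Exception {
    // Row key = full region name, taken from the "updating hbase:meta row=..." lines above.
    byte[] row = Bytes.toBytes(
        "Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      Get get = new Get(row).addFamily(Bytes.toBytes("info"));
      Result r = meta.get(get);
      // info:server and info:state are among the columns the RegionStateStore Puts write.
      String server = Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("server")));
      String state = Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state")));
      System.out.println("location=" + server + " state=" + state);
    }
  }
}
```

The state column holds the region state name (OPENING, OPEN, CLOSING, and so on) that the pid=22..26 updates above are cycling through while the OpenRegionProcedures run.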
2023-07-19 05:14:51,165 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=6790c8e1d988dd7656ea21c548741328, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:51,166 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743691165"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743691165"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743691165"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743691165"}]},"ts":"1689743691165"} 2023-07-19 05:14:51,166 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=24 2023-07-19 05:14:51,166 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=24, state=SUCCESS; OpenRegionProcedure 246041b22a4c0c68769b13202d5db25c, server=jenkins-hbase4.apache.org,43237,1689743687175 in 304 msec 2023-07-19 05:14:51,168 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=246041b22a4c0c68769b13202d5db25c, ASSIGN in 482 msec 2023-07-19 05:14:51,171 INFO [StoreOpener-329b1dced0c6dac68200f800600fec43-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 329b1dced0c6dac68200f800600fec43 columnFamilyName f 2023-07-19 05:14:51,171 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=23 2023-07-19 05:14:51,172 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=23, state=SUCCESS; OpenRegionProcedure 6790c8e1d988dd7656ea21c548741328, server=jenkins-hbase4.apache.org,45681,1689743683028 in 315 msec 2023-07-19 05:14:51,172 INFO [StoreOpener-329b1dced0c6dac68200f800600fec43-1] regionserver.HStore(310): Store=329b1dced0c6dac68200f800600fec43/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:51,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:51,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:51,175 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, 
ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6790c8e1d988dd7656ea21c548741328, ASSIGN in 488 msec 2023-07-19 05:14:51,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:51,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:14:51,181 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 329b1dced0c6dac68200f800600fec43; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10321128000, jitterRate=-0.038770049810409546}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:51,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 329b1dced0c6dac68200f800600fec43: 2023-07-19 05:14:51,182 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43., pid=29, masterSystemTime=1689743691052 2023-07-19 05:14:51,185 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 2023-07-19 05:14:51,186 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 
2023-07-19 05:14:51,186 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=329b1dced0c6dac68200f800600fec43, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:51,186 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743691186"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743691186"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743691186"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743691186"}]},"ts":"1689743691186"} 2023-07-19 05:14:51,191 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=22 2023-07-19 05:14:51,191 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=22, state=SUCCESS; OpenRegionProcedure 329b1dced0c6dac68200f800600fec43, server=jenkins-hbase4.apache.org,43237,1689743687175 in 333 msec 2023-07-19 05:14:51,194 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-19 05:14:51,194 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=329b1dced0c6dac68200f800600fec43, ASSIGN in 507 msec 2023-07-19 05:14:51,195 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:14:51,195 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743691195"}]},"ts":"1689743691195"} 2023-07-19 05:14:51,198 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-19 05:14:51,202 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:14:51,204 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 1.3770 sec 2023-07-19 05:14:51,577 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-19 05:14:51,654 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-19 05:14:51,655 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-19 05:14:51,655 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 05:14:51,655 INFO [HBase-Metrics2-1] 
impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-19 05:14:51,656 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-19 05:14:51,656 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-19 05:14:51,657 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-19 05:14:51,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-19 05:14:51,967 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 21 completed 2023-07-19 05:14:51,967 DEBUG [Listener at localhost/38799] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-19 05:14:51,968 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:14:51,976 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41979] ipc.CallRunner(144): callId: 51 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:35558 deadline: 1689743751975, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45681 startCode=1689743683028. As of locationSeqNum=16. 2023-07-19 05:14:52,078 DEBUG [hconnection-0x6043b73e-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:14:52,091 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36154, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:14:52,102 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-19 05:14:52,103 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:14:52,104 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 
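At this point the CreateTableProcedure (procId 21) has finished and the client sees the CREATE operation as complete. For orientation, a rough sketch of creating an equivalent table from a plain client: one column family f and split keys yielding five regions, matching the boundaries visible in the region names earlier in the log. The two binary boundaries are replaced by printable placeholders, and this code is an illustrative assumption, not the test's actual setup path:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTable {
  public static void main(String[] args) throws Exception {
    TableName name = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    // Four split keys give five regions; the two binary boundaries from the log
    // (i\xBF\x14i\xBE and r\x1C\xC7r\x1B) are replaced with printable stand-ins.
    byte[][] splitKeys = {
        Bytes.toBytes("aaaaa"),
        Bytes.toBytes("jjjjj"),
        Bytes.toBytes("rrrrr"),
        Bytes.toBytes("zzzzz")
    };
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.createTable(
          TableDescriptorBuilder.newBuilder(name)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build(),
          splitKeys);
      // createTable blocks until the master-side CreateTableProcedure completes.
    }
  }
}
```

Admin.createTable only returns once the procedure reports success, which is why the test then moves straight on to waiting for assignment (the "Waiting until all regions of table ... get assigned" lines above).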
2023-07-19 05:14:52,105 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:14:52,111 DEBUG [Listener at localhost/38799] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 05:14:52,114 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55520, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 05:14:52,117 DEBUG [Listener at localhost/38799] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 05:14:52,119 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49950, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 05:14:52,120 DEBUG [Listener at localhost/38799] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 05:14:52,123 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57848, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 05:14:52,124 DEBUG [Listener at localhost/38799] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 05:14:52,126 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36156, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 05:14:52,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:52,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:14:52,139 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:52,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:52,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:52,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:52,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:52,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:52,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:52,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] 
rsgroup.RSGroupAdminServer(345): Moving region 329b1dced0c6dac68200f800600fec43 to RSGroup Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:52,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:14:52,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:14:52,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:14:52,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:14:52,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:14:52,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=329b1dced0c6dac68200f800600fec43, REOPEN/MOVE 2023-07-19 05:14:52,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(345): Moving region 6790c8e1d988dd7656ea21c548741328 to RSGroup Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:52,166 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=329b1dced0c6dac68200f800600fec43, REOPEN/MOVE 2023-07-19 05:14:52,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:14:52,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:14:52,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:14:52,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:14:52,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:14:52,167 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=329b1dced0c6dac68200f800600fec43, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:52,167 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743692167"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743692167"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743692167"}]},"ts":"1689743692167"} 2023-07-19 05:14:52,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=6790c8e1d988dd7656ea21c548741328, REOPEN/MOVE 2023-07-19 05:14:52,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(345): Moving region 246041b22a4c0c68769b13202d5db25c to RSGroup Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:52,169 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6790c8e1d988dd7656ea21c548741328, REOPEN/MOVE 2023-07-19 05:14:52,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:14:52,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:14:52,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:14:52,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:14:52,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:14:52,170 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=32, state=RUNNABLE; CloseRegionProcedure 329b1dced0c6dac68200f800600fec43, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:14:52,171 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=6790c8e1d988dd7656ea21c548741328, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:52,171 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743692171"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743692171"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743692171"}]},"ts":"1689743692171"} 2023-07-19 05:14:52,174 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=33, state=RUNNABLE; CloseRegionProcedure 6790c8e1d988dd7656ea21c548741328, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:14:52,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=246041b22a4c0c68769b13202d5db25c, REOPEN/MOVE 2023-07-19 05:14:52,179 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=246041b22a4c0c68769b13202d5db25c, REOPEN/MOVE 2023-07-19 05:14:52,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(345): Moving region 724483523107c4b407be49a4be4a8c59 to RSGroup Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:52,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are 
{/default-rack=0} 2023-07-19 05:14:52,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:14:52,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:14:52,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:14:52,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:14:52,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=37, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=724483523107c4b407be49a4be4a8c59, REOPEN/MOVE 2023-07-19 05:14:52,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(345): Moving region 579d97cc3415548daa398aa865f7d496 to RSGroup Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:52,213 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=37, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=724483523107c4b407be49a4be4a8c59, REOPEN/MOVE 2023-07-19 05:14:52,208 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=246041b22a4c0c68769b13202d5db25c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:52,213 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743692180"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743692180"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743692180"}]},"ts":"1689743692180"} 2023-07-19 05:14:52,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:14:52,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:14:52,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:14:52,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:14:52,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:14:52,215 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=724483523107c4b407be49a4be4a8c59, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:52,215 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743692215"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743692215"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743692215"}]},"ts":"1689743692215"} 2023-07-19 05:14:52,216 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=34, state=RUNNABLE; CloseRegionProcedure 246041b22a4c0c68769b13202d5db25c, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:14:52,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=579d97cc3415548daa398aa865f7d496, REOPEN/MOVE 2023-07-19 05:14:52,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_2021517430, current retry=0 2023-07-19 05:14:52,217 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=579d97cc3415548daa398aa865f7d496, REOPEN/MOVE 2023-07-19 05:14:52,220 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=579d97cc3415548daa398aa865f7d496, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:52,220 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743692220"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743692220"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743692220"}]},"ts":"1689743692220"} 2023-07-19 05:14:52,220 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=37, state=RUNNABLE; CloseRegionProcedure 724483523107c4b407be49a4be4a8c59, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:14:52,223 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=38, state=RUNNABLE; CloseRegionProcedure 579d97cc3415548daa398aa865f7d496, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:14:52,362 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:52,367 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 246041b22a4c0c68769b13202d5db25c, disabling compactions & flushes 2023-07-19 05:14:52,367 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 2023-07-19 05:14:52,367 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:52,367 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 
2023-07-19 05:14:52,367 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. after waiting 0 ms 2023-07-19 05:14:52,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 724483523107c4b407be49a4be4a8c59, disabling compactions & flushes 2023-07-19 05:14:52,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 2023-07-19 05:14:52,368 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:52,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:52,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. after waiting 0 ms 2023-07-19 05:14:52,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:52,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:14:52,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:14:52,379 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 2023-07-19 05:14:52,379 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 246041b22a4c0c68769b13202d5db25c: 2023-07-19 05:14:52,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 246041b22a4c0c68769b13202d5db25c move to jenkins-hbase4.apache.org,41899,1689743683228 record at close sequenceid=2 2023-07-19 05:14:52,381 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 
2023-07-19 05:14:52,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 724483523107c4b407be49a4be4a8c59: 2023-07-19 05:14:52,381 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 724483523107c4b407be49a4be4a8c59 move to jenkins-hbase4.apache.org,41899,1689743683228 record at close sequenceid=2 2023-07-19 05:14:52,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:52,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:52,386 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=246041b22a4c0c68769b13202d5db25c, regionState=CLOSED 2023-07-19 05:14:52,386 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743692386"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743692386"}]},"ts":"1689743692386"} 2023-07-19 05:14:52,386 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:52,386 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:52,392 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 579d97cc3415548daa398aa865f7d496, disabling compactions & flushes 2023-07-19 05:14:52,392 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 2023-07-19 05:14:52,392 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 2023-07-19 05:14:52,392 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. after waiting 0 ms 2023-07-19 05:14:52,392 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 2023-07-19 05:14:52,393 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=34 2023-07-19 05:14:52,392 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6790c8e1d988dd7656ea21c548741328, disabling compactions & flushes 2023-07-19 05:14:52,393 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=34, state=SUCCESS; CloseRegionProcedure 246041b22a4c0c68769b13202d5db25c, server=jenkins-hbase4.apache.org,43237,1689743687175 in 172 msec 2023-07-19 05:14:52,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 
2023-07-19 05:14:52,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 2023-07-19 05:14:52,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. after waiting 0 ms 2023-07-19 05:14:52,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 2023-07-19 05:14:52,395 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=724483523107c4b407be49a4be4a8c59, regionState=CLOSED 2023-07-19 05:14:52,395 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=246041b22a4c0c68769b13202d5db25c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41899,1689743683228; forceNewPlan=false, retain=false 2023-07-19 05:14:52,395 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743692395"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743692395"}]},"ts":"1689743692395"} 2023-07-19 05:14:52,400 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=37 2023-07-19 05:14:52,400 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=37, state=SUCCESS; CloseRegionProcedure 724483523107c4b407be49a4be4a8c59, server=jenkins-hbase4.apache.org,45681,1689743683028 in 177 msec 2023-07-19 05:14:52,401 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=37, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=724483523107c4b407be49a4be4a8c59, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41899,1689743683228; forceNewPlan=false, retain=false 2023-07-19 05:14:52,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:14:52,419 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 
2023-07-19 05:14:52,419 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6790c8e1d988dd7656ea21c548741328: 2023-07-19 05:14:52,419 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6790c8e1d988dd7656ea21c548741328 move to jenkins-hbase4.apache.org,41979,1689743683435 record at close sequenceid=2 2023-07-19 05:14:52,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:14:52,423 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 2023-07-19 05:14:52,423 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 579d97cc3415548daa398aa865f7d496: 2023-07-19 05:14:52,423 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 579d97cc3415548daa398aa865f7d496 move to jenkins-hbase4.apache.org,41899,1689743683228 record at close sequenceid=2 2023-07-19 05:14:52,435 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:52,437 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=6790c8e1d988dd7656ea21c548741328, regionState=CLOSED 2023-07-19 05:14:52,438 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743692437"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743692437"}]},"ts":"1689743692437"} 2023-07-19 05:14:52,439 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=579d97cc3415548daa398aa865f7d496, regionState=CLOSED 2023-07-19 05:14:52,440 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:52,440 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743692439"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743692439"}]},"ts":"1689743692439"} 2023-07-19 05:14:52,440 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:52,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 329b1dced0c6dac68200f800600fec43, disabling compactions & flushes 2023-07-19 05:14:52,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 2023-07-19 05:14:52,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 
2023-07-19 05:14:52,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. after waiting 0 ms 2023-07-19 05:14:52,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 2023-07-19 05:14:52,445 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=33 2023-07-19 05:14:52,445 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=33, state=SUCCESS; CloseRegionProcedure 6790c8e1d988dd7656ea21c548741328, server=jenkins-hbase4.apache.org,45681,1689743683028 in 268 msec 2023-07-19 05:14:52,446 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=38 2023-07-19 05:14:52,446 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=38, state=SUCCESS; CloseRegionProcedure 579d97cc3415548daa398aa865f7d496, server=jenkins-hbase4.apache.org,43237,1689743687175 in 220 msec 2023-07-19 05:14:52,447 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6790c8e1d988dd7656ea21c548741328, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41979,1689743683435; forceNewPlan=false, retain=false 2023-07-19 05:14:52,448 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=38, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=579d97cc3415548daa398aa865f7d496, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41899,1689743683228; forceNewPlan=false, retain=false 2023-07-19 05:14:52,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:14:52,466 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 
2023-07-19 05:14:52,466 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 329b1dced0c6dac68200f800600fec43: 2023-07-19 05:14:52,466 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 329b1dced0c6dac68200f800600fec43 move to jenkins-hbase4.apache.org,41979,1689743683435 record at close sequenceid=2 2023-07-19 05:14:52,469 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:52,469 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=329b1dced0c6dac68200f800600fec43, regionState=CLOSED 2023-07-19 05:14:52,470 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743692469"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743692469"}]},"ts":"1689743692469"} 2023-07-19 05:14:52,491 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=32 2023-07-19 05:14:52,491 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=32, state=SUCCESS; CloseRegionProcedure 329b1dced0c6dac68200f800600fec43, server=jenkins-hbase4.apache.org,43237,1689743687175 in 302 msec 2023-07-19 05:14:52,505 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=329b1dced0c6dac68200f800600fec43, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41979,1689743683435; forceNewPlan=false, retain=false 2023-07-19 05:14:52,545 INFO [jenkins-hbase4:35853] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-19 05:14:52,546 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=579d97cc3415548daa398aa865f7d496, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:52,546 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=246041b22a4c0c68769b13202d5db25c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:52,546 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=6790c8e1d988dd7656ea21c548741328, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:52,547 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=329b1dced0c6dac68200f800600fec43, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:52,547 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743692546"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743692546"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743692546"}]},"ts":"1689743692546"} 2023-07-19 05:14:52,548 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743692547"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743692547"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743692547"}]},"ts":"1689743692547"} 2023-07-19 05:14:52,546 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743692546"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743692546"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743692546"}]},"ts":"1689743692546"} 2023-07-19 05:14:52,547 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=724483523107c4b407be49a4be4a8c59, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:52,547 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743692546"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743692546"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743692546"}]},"ts":"1689743692546"} 2023-07-19 05:14:52,548 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743692547"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743692547"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743692547"}]},"ts":"1689743692547"} 2023-07-19 05:14:52,551 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=32, state=RUNNABLE; OpenRegionProcedure 
329b1dced0c6dac68200f800600fec43, server=jenkins-hbase4.apache.org,41979,1689743683435}] 2023-07-19 05:14:52,553 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=33, state=RUNNABLE; OpenRegionProcedure 6790c8e1d988dd7656ea21c548741328, server=jenkins-hbase4.apache.org,41979,1689743683435}] 2023-07-19 05:14:52,561 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=38, state=RUNNABLE; OpenRegionProcedure 579d97cc3415548daa398aa865f7d496, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:52,563 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=34, state=RUNNABLE; OpenRegionProcedure 246041b22a4c0c68769b13202d5db25c, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:52,570 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=37, state=RUNNABLE; OpenRegionProcedure 724483523107c4b407be49a4be4a8c59, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:52,709 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 2023-07-19 05:14:52,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 329b1dced0c6dac68200f800600fec43, NAME => 'Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-19 05:14:52,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:52,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:52,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:52,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:52,722 INFO [StoreOpener-329b1dced0c6dac68200f800600fec43-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:52,723 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 
2023-07-19 05:14:52,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 724483523107c4b407be49a4be4a8c59, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-19 05:14:52,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:52,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:52,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:52,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:52,725 INFO [StoreOpener-724483523107c4b407be49a4be4a8c59-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:52,727 DEBUG [StoreOpener-724483523107c4b407be49a4be4a8c59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59/f 2023-07-19 05:14:52,728 DEBUG [StoreOpener-724483523107c4b407be49a4be4a8c59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59/f 2023-07-19 05:14:52,728 INFO [StoreOpener-724483523107c4b407be49a4be4a8c59-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 724483523107c4b407be49a4be4a8c59 columnFamilyName f 2023-07-19 05:14:52,729 DEBUG [StoreOpener-329b1dced0c6dac68200f800600fec43-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43/f 2023-07-19 05:14:52,729 DEBUG [StoreOpener-329b1dced0c6dac68200f800600fec43-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43/f 2023-07-19 05:14:52,729 INFO [StoreOpener-724483523107c4b407be49a4be4a8c59-1] regionserver.HStore(310): Store=724483523107c4b407be49a4be4a8c59/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:52,729 INFO [StoreOpener-329b1dced0c6dac68200f800600fec43-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 329b1dced0c6dac68200f800600fec43 columnFamilyName f 2023-07-19 05:14:52,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:52,730 INFO [StoreOpener-329b1dced0c6dac68200f800600fec43-1] regionserver.HStore(310): Store=329b1dced0c6dac68200f800600fec43/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:52,731 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:52,731 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:52,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:52,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:52,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:52,739 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 724483523107c4b407be49a4be4a8c59; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11477415200, jitterRate=0.06891758739948273}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:52,739 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 724483523107c4b407be49a4be4a8c59: 2023-07-19 05:14:52,740 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 329b1dced0c6dac68200f800600fec43; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11223124800, jitterRate=0.04523494839668274}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:52,740 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59., pid=46, masterSystemTime=1689743692715 2023-07-19 05:14:52,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 329b1dced0c6dac68200f800600fec43: 2023-07-19 05:14:52,742 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43., pid=42, masterSystemTime=1689743692704 2023-07-19 05:14:52,744 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:52,744 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:52,744 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 
2023-07-19 05:14:52,744 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 246041b22a4c0c68769b13202d5db25c, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-19 05:14:52,745 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=724483523107c4b407be49a4be4a8c59, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:52,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:52,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:52,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:52,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:52,745 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743692744"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743692744"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743692744"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743692744"}]},"ts":"1689743692744"} 2023-07-19 05:14:52,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 2023-07-19 05:14:52,747 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 2023-07-19 05:14:52,747 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 
2023-07-19 05:14:52,748 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=329b1dced0c6dac68200f800600fec43, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:52,748 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743692747"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743692747"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743692747"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743692747"}]},"ts":"1689743692747"} 2023-07-19 05:14:52,748 INFO [StoreOpener-246041b22a4c0c68769b13202d5db25c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:52,749 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6790c8e1d988dd7656ea21c548741328, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-19 05:14:52,749 DEBUG [StoreOpener-246041b22a4c0c68769b13202d5db25c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c/f 2023-07-19 05:14:52,750 DEBUG [StoreOpener-246041b22a4c0c68769b13202d5db25c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c/f 2023-07-19 05:14:52,750 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:52,750 INFO [StoreOpener-246041b22a4c0c68769b13202d5db25c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 246041b22a4c0c68769b13202d5db25c columnFamilyName f 2023-07-19 05:14:52,751 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:52,751 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=37 2023-07-19 
05:14:52,751 INFO [StoreOpener-246041b22a4c0c68769b13202d5db25c-1] regionserver.HStore(310): Store=246041b22a4c0c68769b13202d5db25c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:52,751 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:52,751 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=37, state=SUCCESS; OpenRegionProcedure 724483523107c4b407be49a4be4a8c59, server=jenkins-hbase4.apache.org,41899,1689743683228 in 178 msec 2023-07-19 05:14:52,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:52,753 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:52,755 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=32 2023-07-19 05:14:52,755 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=37, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=724483523107c4b407be49a4be4a8c59, REOPEN/MOVE in 541 msec 2023-07-19 05:14:52,755 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=32, state=SUCCESS; OpenRegionProcedure 329b1dced0c6dac68200f800600fec43, server=jenkins-hbase4.apache.org,41979,1689743683435 in 199 msec 2023-07-19 05:14:52,755 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:52,755 INFO [StoreOpener-6790c8e1d988dd7656ea21c548741328-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:52,757 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=329b1dced0c6dac68200f800600fec43, REOPEN/MOVE in 591 msec 2023-07-19 05:14:52,759 DEBUG [StoreOpener-6790c8e1d988dd7656ea21c548741328-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328/f 2023-07-19 05:14:52,759 DEBUG [StoreOpener-6790c8e1d988dd7656ea21c548741328-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328/f 2023-07-19 05:14:52,760 INFO [StoreOpener-6790c8e1d988dd7656ea21c548741328-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6790c8e1d988dd7656ea21c548741328 columnFamilyName f 2023-07-19 05:14:52,760 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:52,761 INFO [StoreOpener-6790c8e1d988dd7656ea21c548741328-1] regionserver.HStore(310): Store=6790c8e1d988dd7656ea21c548741328/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:52,762 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 246041b22a4c0c68769b13202d5db25c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10252893120, jitterRate=-0.04512491822242737}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:52,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 246041b22a4c0c68769b13202d5db25c: 2023-07-19 05:14:52,763 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c., pid=45, masterSystemTime=1689743692715 2023-07-19 05:14:52,763 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:52,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:52,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 2023-07-19 05:14:52,766 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 2023-07-19 05:14:52,766 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 
2023-07-19 05:14:52,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 579d97cc3415548daa398aa865f7d496, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-19 05:14:52,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:52,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:52,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:52,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:52,767 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=246041b22a4c0c68769b13202d5db25c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:52,767 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743692767"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743692767"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743692767"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743692767"}]},"ts":"1689743692767"} 2023-07-19 05:14:52,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:52,773 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=34 2023-07-19 05:14:52,773 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=34, state=SUCCESS; OpenRegionProcedure 246041b22a4c0c68769b13202d5db25c, server=jenkins-hbase4.apache.org,41899,1689743683228 in 207 msec 2023-07-19 05:14:52,774 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6790c8e1d988dd7656ea21c548741328; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11978185280, jitterRate=0.11555543541908264}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:52,774 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6790c8e1d988dd7656ea21c548741328: 2023-07-19 05:14:52,775 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328., pid=43, masterSystemTime=1689743692704 2023-07-19 05:14:52,775 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=34, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=246041b22a4c0c68769b13202d5db25c, REOPEN/MOVE in 604 msec 2023-07-19 05:14:52,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 2023-07-19 05:14:52,778 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 2023-07-19 05:14:52,778 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=6790c8e1d988dd7656ea21c548741328, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:52,778 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743692778"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743692778"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743692778"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743692778"}]},"ts":"1689743692778"} 2023-07-19 05:14:52,784 INFO [StoreOpener-579d97cc3415548daa398aa865f7d496-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:52,787 DEBUG [StoreOpener-579d97cc3415548daa398aa865f7d496-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496/f 2023-07-19 05:14:52,787 DEBUG [StoreOpener-579d97cc3415548daa398aa865f7d496-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496/f 2023-07-19 05:14:52,788 INFO [StoreOpener-579d97cc3415548daa398aa865f7d496-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 579d97cc3415548daa398aa865f7d496 columnFamilyName f 2023-07-19 05:14:52,789 INFO [StoreOpener-579d97cc3415548daa398aa865f7d496-1] regionserver.HStore(310): Store=579d97cc3415548daa398aa865f7d496/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:52,790 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:52,792 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=33 2023-07-19 05:14:52,792 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:52,792 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=33, state=SUCCESS; OpenRegionProcedure 6790c8e1d988dd7656ea21c548741328, server=jenkins-hbase4.apache.org,41979,1689743683435 in 234 msec 2023-07-19 05:14:52,796 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6790c8e1d988dd7656ea21c548741328, REOPEN/MOVE in 625 msec 2023-07-19 05:14:52,796 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:52,798 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 579d97cc3415548daa398aa865f7d496; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10891287520, jitterRate=0.014330193400382996}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:52,798 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 579d97cc3415548daa398aa865f7d496: 2023-07-19 05:14:52,799 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496., pid=44, masterSystemTime=1689743692715 2023-07-19 05:14:52,802 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 2023-07-19 05:14:52,802 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 
2023-07-19 05:14:52,803 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=579d97cc3415548daa398aa865f7d496, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:52,803 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743692803"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743692803"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743692803"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743692803"}]},"ts":"1689743692803"} 2023-07-19 05:14:52,811 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=38 2023-07-19 05:14:52,811 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=38, state=SUCCESS; OpenRegionProcedure 579d97cc3415548daa398aa865f7d496, server=jenkins-hbase4.apache.org,41899,1689743683228 in 244 msec 2023-07-19 05:14:52,813 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=579d97cc3415548daa398aa865f7d496, REOPEN/MOVE in 597 msec 2023-07-19 05:14:53,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure.ProcedureSyncWait(216): waitFor pid=32 2023-07-19 05:14:53,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_2021517430. 
2023-07-19 05:14:53,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:14:53,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:53,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:53,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:53,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:14:53,227 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:14:53,233 INFO [Listener at localhost/38799] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:53,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:53,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=47, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:53,249 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743693249"}]},"ts":"1689743693249"} 2023-07-19 05:14:53,259 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-19 05:14:53,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-19 05:14:53,260 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-19 05:14:53,266 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=329b1dced0c6dac68200f800600fec43, UNASSIGN}, {pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6790c8e1d988dd7656ea21c548741328, UNASSIGN}, {pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=246041b22a4c0c68769b13202d5db25c, UNASSIGN}, {pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=724483523107c4b407be49a4be4a8c59, UNASSIGN}, {pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=579d97cc3415548daa398aa865f7d496, UNASSIGN}] 2023-07-19 05:14:53,269 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=329b1dced0c6dac68200f800600fec43, UNASSIGN 2023-07-19 05:14:53,269 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6790c8e1d988dd7656ea21c548741328, UNASSIGN 2023-07-19 05:14:53,270 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=246041b22a4c0c68769b13202d5db25c, UNASSIGN 2023-07-19 05:14:53,270 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=724483523107c4b407be49a4be4a8c59, UNASSIGN 2023-07-19 05:14:53,271 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=579d97cc3415548daa398aa865f7d496, UNASSIGN 2023-07-19 05:14:53,271 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=329b1dced0c6dac68200f800600fec43, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:53,271 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=6790c8e1d988dd7656ea21c548741328, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:53,272 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743693271"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743693271"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743693271"}]},"ts":"1689743693271"} 2023-07-19 05:14:53,272 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743693271"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743693271"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743693271"}]},"ts":"1689743693271"} 2023-07-19 05:14:53,272 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=724483523107c4b407be49a4be4a8c59, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:53,272 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743693271"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743693271"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743693271"}]},"ts":"1689743693271"} 2023-07-19 05:14:53,273 INFO 
[PEWorker-5] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=246041b22a4c0c68769b13202d5db25c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:53,274 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743693273"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743693273"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743693273"}]},"ts":"1689743693273"} 2023-07-19 05:14:53,274 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=579d97cc3415548daa398aa865f7d496, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:53,274 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743693274"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743693274"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743693274"}]},"ts":"1689743693274"} 2023-07-19 05:14:53,278 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=48, state=RUNNABLE; CloseRegionProcedure 329b1dced0c6dac68200f800600fec43, server=jenkins-hbase4.apache.org,41979,1689743683435}] 2023-07-19 05:14:53,280 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=49, state=RUNNABLE; CloseRegionProcedure 6790c8e1d988dd7656ea21c548741328, server=jenkins-hbase4.apache.org,41979,1689743683435}] 2023-07-19 05:14:53,281 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=51, state=RUNNABLE; CloseRegionProcedure 724483523107c4b407be49a4be4a8c59, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:53,283 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=50, state=RUNNABLE; CloseRegionProcedure 246041b22a4c0c68769b13202d5db25c, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:53,283 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=52, state=RUNNABLE; CloseRegionProcedure 579d97cc3415548daa398aa865f7d496, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:53,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-19 05:14:53,432 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:53,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 329b1dced0c6dac68200f800600fec43, disabling compactions & flushes 2023-07-19 05:14:53,433 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 2023-07-19 05:14:53,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 
2023-07-19 05:14:53,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. after waiting 0 ms 2023-07-19 05:14:53,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 2023-07-19 05:14:53,436 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:53,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 579d97cc3415548daa398aa865f7d496, disabling compactions & flushes 2023-07-19 05:14:53,437 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 2023-07-19 05:14:53,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 2023-07-19 05:14:53,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. after waiting 0 ms 2023-07-19 05:14:53,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 2023-07-19 05:14:53,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 05:14:53,443 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43. 2023-07-19 05:14:53,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 329b1dced0c6dac68200f800600fec43: 2023-07-19 05:14:53,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 05:14:53,445 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496. 
2023-07-19 05:14:53,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 579d97cc3415548daa398aa865f7d496: 2023-07-19 05:14:53,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:53,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:53,448 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6790c8e1d988dd7656ea21c548741328, disabling compactions & flushes 2023-07-19 05:14:53,448 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 2023-07-19 05:14:53,449 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 2023-07-19 05:14:53,449 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. after waiting 0 ms 2023-07-19 05:14:53,449 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 2023-07-19 05:14:53,449 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=329b1dced0c6dac68200f800600fec43, regionState=CLOSED 2023-07-19 05:14:53,449 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743693449"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743693449"}]},"ts":"1689743693449"} 2023-07-19 05:14:53,449 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:53,450 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:53,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 246041b22a4c0c68769b13202d5db25c, disabling compactions & flushes 2023-07-19 05:14:53,452 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 2023-07-19 05:14:53,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 2023-07-19 05:14:53,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 
after waiting 0 ms 2023-07-19 05:14:53,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 2023-07-19 05:14:53,456 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=579d97cc3415548daa398aa865f7d496, regionState=CLOSED 2023-07-19 05:14:53,456 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743693456"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743693456"}]},"ts":"1689743693456"} 2023-07-19 05:14:53,460 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=48 2023-07-19 05:14:53,460 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=48, state=SUCCESS; CloseRegionProcedure 329b1dced0c6dac68200f800600fec43, server=jenkins-hbase4.apache.org,41979,1689743683435 in 178 msec 2023-07-19 05:14:53,464 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=52 2023-07-19 05:14:53,464 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=329b1dced0c6dac68200f800600fec43, UNASSIGN in 198 msec 2023-07-19 05:14:53,464 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; CloseRegionProcedure 579d97cc3415548daa398aa865f7d496, server=jenkins-hbase4.apache.org,41899,1689743683228 in 175 msec 2023-07-19 05:14:53,469 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=579d97cc3415548daa398aa865f7d496, UNASSIGN in 198 msec 2023-07-19 05:14:53,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 05:14:53,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 05:14:53,476 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c. 2023-07-19 05:14:53,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 246041b22a4c0c68769b13202d5db25c: 2023-07-19 05:14:53,477 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328. 
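The UNASSIGN and CloseRegionProcedure entries above are children of the DisableTableProcedure stored as pid=47; from the client it is one blocking Admin call. A minimal sketch using the standard Admin API (connection setup is illustrative):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableTableSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Submits a DisableTableProcedure on the master and polls it until done,
          // which is what the repeated "Checking to see if procedure is done pid=47"
          // entries reflect.
          admin.disableTable(table);
          System.out.println("disabled: " + admin.isTableDisabled(table));
        }
      }
    }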
2023-07-19 05:14:53,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6790c8e1d988dd7656ea21c548741328: 2023-07-19 05:14:53,479 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:53,479 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:53,480 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 724483523107c4b407be49a4be4a8c59, disabling compactions & flushes 2023-07-19 05:14:53,481 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:53,481 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:53,481 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. after waiting 0 ms 2023-07-19 05:14:53,481 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:53,482 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=246041b22a4c0c68769b13202d5db25c, regionState=CLOSED 2023-07-19 05:14:53,482 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743693481"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743693481"}]},"ts":"1689743693481"} 2023-07-19 05:14:53,482 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:53,483 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=6790c8e1d988dd7656ea21c548741328, regionState=CLOSED 2023-07-19 05:14:53,483 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743693483"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743693483"}]},"ts":"1689743693483"} 2023-07-19 05:14:53,489 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=50 2023-07-19 05:14:53,489 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=50, state=SUCCESS; CloseRegionProcedure 246041b22a4c0c68769b13202d5db25c, server=jenkins-hbase4.apache.org,41899,1689743683228 in 202 msec 2023-07-19 05:14:53,491 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=49 2023-07-19 05:14:53,491 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=47, state=SUCCESS; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=246041b22a4c0c68769b13202d5db25c, UNASSIGN in 227 msec 2023-07-19 05:14:53,491 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; CloseRegionProcedure 6790c8e1d988dd7656ea21c548741328, server=jenkins-hbase4.apache.org,41979,1689743683435 in 206 msec 2023-07-19 05:14:53,493 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6790c8e1d988dd7656ea21c548741328, UNASSIGN in 229 msec 2023-07-19 05:14:53,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 05:14:53,496 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59. 2023-07-19 05:14:53,496 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 724483523107c4b407be49a4be4a8c59: 2023-07-19 05:14:53,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:53,499 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=724483523107c4b407be49a4be4a8c59, regionState=CLOSED 2023-07-19 05:14:53,500 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743693499"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743693499"}]},"ts":"1689743693499"} 2023-07-19 05:14:53,506 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=51 2023-07-19 05:14:53,506 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=51, state=SUCCESS; CloseRegionProcedure 724483523107c4b407be49a4be4a8c59, server=jenkins-hbase4.apache.org,41899,1689743683228 in 221 msec 2023-07-19 05:14:53,511 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=47 2023-07-19 05:14:53,511 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=724483523107c4b407be49a4be4a8c59, UNASSIGN in 244 msec 2023-07-19 05:14:53,512 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743693512"}]},"ts":"1689743693512"} 2023-07-19 05:14:53,514 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-19 05:14:53,516 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-19 05:14:53,518 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=47, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 277 msec 2023-07-19 05:14:53,562 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-19 05:14:53,562 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 47 completed 2023-07-19 05:14:53,564 INFO [Listener at localhost/38799] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:53,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:53,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=58, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-19 05:14:53,581 DEBUG [PEWorker-3] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-19 05:14:53,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-19 05:14:53,595 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:53,595 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:53,595 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:53,595 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:53,595 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:53,600 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496/recovered.edits] 2023-07-19 05:14:53,600 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328/recovered.edits] 2023-07-19 05:14:53,600 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving 
[FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c/recovered.edits] 2023-07-19 05:14:53,601 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59/recovered.edits] 2023-07-19 05:14:53,604 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43/recovered.edits] 2023-07-19 05:14:53,619 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496/recovered.edits/7.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496/recovered.edits/7.seqid 2023-07-19 05:14:53,619 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c/recovered.edits/7.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c/recovered.edits/7.seqid 2023-07-19 05:14:53,619 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328/recovered.edits/7.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328/recovered.edits/7.seqid 2023-07-19 05:14:53,620 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59/recovered.edits/7.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59/recovered.edits/7.seqid 2023-07-19 05:14:53,621 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted 
hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/579d97cc3415548daa398aa865f7d496 2023-07-19 05:14:53,621 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6790c8e1d988dd7656ea21c548741328 2023-07-19 05:14:53,621 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/246041b22a4c0c68769b13202d5db25c 2023-07-19 05:14:53,621 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/724483523107c4b407be49a4be4a8c59 2023-07-19 05:14:53,622 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43/recovered.edits/7.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43/recovered.edits/7.seqid 2023-07-19 05:14:53,622 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/329b1dced0c6dac68200f800600fec43 2023-07-19 05:14:53,622 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-19 05:14:53,654 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-19 05:14:53,658 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-19 05:14:53,659 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-19 05:14:53,659 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743693659"}]},"ts":"9223372036854775807"} 2023-07-19 05:14:53,659 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743693659"}]},"ts":"9223372036854775807"} 2023-07-19 05:14:53,659 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743693659"}]},"ts":"9223372036854775807"} 2023-07-19 05:14:53,659 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743693659"}]},"ts":"9223372036854775807"} 2023-07-19 05:14:53,659 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743693659"}]},"ts":"9223372036854775807"} 2023-07-19 05:14:53,662 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-19 05:14:53,662 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 329b1dced0c6dac68200f800600fec43, NAME => 'Group_testTableMoveTruncateAndDrop,,1689743689823.329b1dced0c6dac68200f800600fec43.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 6790c8e1d988dd7656ea21c548741328, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689743689823.6790c8e1d988dd7656ea21c548741328.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 246041b22a4c0c68769b13202d5db25c, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743689823.246041b22a4c0c68769b13202d5db25c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 724483523107c4b407be49a4be4a8c59, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743689823.724483523107c4b407be49a4be4a8c59.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 579d97cc3415548daa398aa865f7d496, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689743689823.579d97cc3415548daa398aa865f7d496.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-19 05:14:53,662 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-19 05:14:53,662 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689743693662"}]},"ts":"9223372036854775807"} 2023-07-19 05:14:53,665 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-19 05:14:53,681 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d8b7852ef4980a4c7cc34d99246dda0 2023-07-19 05:14:53,681 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17d63eb81f76d3255005aee63b4da396 2023-07-19 05:14:53,681 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5 2023-07-19 05:14:53,681 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db6251b838fdff2f42d347dbbd6e9f1e 2023-07-19 05:14:53,681 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e72386175ab3d5172354fe9cee7cced9 2023-07-19 05:14:53,682 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d8b7852ef4980a4c7cc34d99246dda0 empty. 2023-07-19 05:14:53,682 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5 empty. 2023-07-19 05:14:53,682 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17d63eb81f76d3255005aee63b4da396 empty. 2023-07-19 05:14:53,682 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e72386175ab3d5172354fe9cee7cced9 empty. 2023-07-19 05:14:53,683 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db6251b838fdff2f42d347dbbd6e9f1e empty. 
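The TruncateTableProcedure above (pid=58, preserveSplits=true) archives the old region directories and lays out fresh ones under .tmp with the same split keys. From the client it is a single Admin call on an already-disabled table; a minimal sketch:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateTableSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // preserveSplits=true keeps the existing split points, so the table is
          // re-created with the same five regions (new encoded names, empty data).
          admin.truncateTable(table, true);
        }
      }
    }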
2023-07-19 05:14:53,683 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5 2023-07-19 05:14:53,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-19 05:14:53,684 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17d63eb81f76d3255005aee63b4da396 2023-07-19 05:14:53,684 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e72386175ab3d5172354fe9cee7cced9 2023-07-19 05:14:53,684 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d8b7852ef4980a4c7cc34d99246dda0 2023-07-19 05:14:53,684 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db6251b838fdff2f42d347dbbd6e9f1e 2023-07-19 05:14:53,685 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-19 05:14:53,706 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-19 05:14:53,707 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => f4195847c04bfb9472308f2ce6493ef5, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:14:53,708 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6d8b7852ef4980a4c7cc34d99246dda0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:14:53,708 INFO 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => db6251b838fdff2f42d347dbbd6e9f1e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:14:53,749 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:53,749 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 6d8b7852ef4980a4c7cc34d99246dda0, disabling compactions & flushes 2023-07-19 05:14:53,749 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0. 2023-07-19 05:14:53,749 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0. 2023-07-19 05:14:53,749 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0. after waiting 0 ms 2023-07-19 05:14:53,749 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0. 2023-07-19 05:14:53,749 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0. 
2023-07-19 05:14:53,749 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 6d8b7852ef4980a4c7cc34d99246dda0: 2023-07-19 05:14:53,750 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 17d63eb81f76d3255005aee63b4da396, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:14:53,750 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:53,751 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing db6251b838fdff2f42d347dbbd6e9f1e, disabling compactions & flushes 2023-07-19 05:14:53,751 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e. 2023-07-19 05:14:53,751 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e. 2023-07-19 05:14:53,751 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e. after waiting 0 ms 2023-07-19 05:14:53,751 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e. 2023-07-19 05:14:53,751 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e. 
2023-07-19 05:14:53,751 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for db6251b838fdff2f42d347dbbd6e9f1e: 2023-07-19 05:14:53,752 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => e72386175ab3d5172354fe9cee7cced9, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:14:53,775 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:53,775 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 17d63eb81f76d3255005aee63b4da396, disabling compactions & flushes 2023-07-19 05:14:53,775 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396. 2023-07-19 05:14:53,775 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396. 2023-07-19 05:14:53,775 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396. after waiting 0 ms 2023-07-19 05:14:53,775 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396. 2023-07-19 05:14:53,775 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396. 
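The creation entries above print the table descriptor that the truncate re-applies: a single family 'f' with VERSIONS=1, BLOOMFILTER=NONE, COMPRESSION=NONE, BLOCKSIZE=65536, and REGION_REPLICATION=1. For reference, a sketch of how that descriptor would be declared with the 2.x builder API; only the attributes visible in the log are set, everything else is left at its default:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DescriptorSketch {
      public static void main(String[] args) {
        ColumnFamilyDescriptor f = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
            .setMaxVersions(1)                              // VERSIONS => '1'
            .setBloomFilterType(BloomType.NONE)             // BLOOMFILTER => 'NONE'
            .setCompressionType(Compression.Algorithm.NONE) // COMPRESSION => 'NONE'
            .setBlocksize(65536)                            // BLOCKSIZE => '65536'
            .setInMemory(false)                             // IN_MEMORY => 'false'
            .setBlockCacheEnabled(true)                     // BLOCKCACHE => 'true'
            .build();
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
            .setRegionReplication(1)                        // REGION_REPLICATION => '1'
            .setColumnFamily(f)
            .build();
        System.out.println(td);
      }
    }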
2023-07-19 05:14:53,775 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 17d63eb81f76d3255005aee63b4da396: 2023-07-19 05:14:53,778 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:53,778 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing e72386175ab3d5172354fe9cee7cced9, disabling compactions & flushes 2023-07-19 05:14:53,778 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9. 2023-07-19 05:14:53,779 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9. 2023-07-19 05:14:53,779 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9. after waiting 0 ms 2023-07-19 05:14:53,779 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9. 2023-07-19 05:14:53,779 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9. 2023-07-19 05:14:53,779 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for e72386175ab3d5172354fe9cee7cced9: 2023-07-19 05:14:53,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-19 05:14:54,148 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:54,148 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing f4195847c04bfb9472308f2ce6493ef5, disabling compactions & flushes 2023-07-19 05:14:54,148 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5. 2023-07-19 05:14:54,148 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5. 2023-07-19 05:14:54,148 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5. 
after waiting 0 ms 2023-07-19 05:14:54,148 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5. 2023-07-19 05:14:54,148 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5. 2023-07-19 05:14:54,148 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for f4195847c04bfb9472308f2ce6493ef5: 2023-07-19 05:14:54,153 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743694153"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743694153"}]},"ts":"1689743694153"} 2023-07-19 05:14:54,153 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743694153"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743694153"}]},"ts":"1689743694153"} 2023-07-19 05:14:54,153 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743694153"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743694153"}]},"ts":"1689743694153"} 2023-07-19 05:14:54,153 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743694153"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743694153"}]},"ts":"1689743694153"} 2023-07-19 05:14:54,153 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743694153"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743694153"}]},"ts":"1689743694153"} 2023-07-19 05:14:54,160 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
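Before the "Added 5 regions to meta" line, PEWorker-3 writes one info:regioninfo and one info:state cell per region into hbase:meta. Those same cells can be read back with an ordinary client scan of the meta table; a rough sketch, assuming the public Scan/RegionInfo APIs (illustrative only, not part of this test):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DumpMetaRows {
  public static void main(String[] args) throws Exception {
    byte[] info = Bytes.toBytes("info");
    try (Connection conn = ConnectionFactory.createConnection();
         Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(new Scan()
             // meta rows for a table start with "<tableName>," so a prefix scan finds them
             .setRowPrefixFilter(Bytes.toBytes("Group_testTableMoveTruncateAndDrop,"))
             .addFamily(info))) {
      for (Result r : scanner) {
        RegionInfo ri = RegionInfo.parseFromOrNull(r.getValue(info, Bytes.toBytes("regioninfo")));
        byte[] state = r.getValue(info, Bytes.toBytes("state"));
        System.out.println((ri == null ? Bytes.toStringBinary(r.getRow()) : ri.getEncodedName())
            + " state=" + (state == null ? "?" : Bytes.toString(state)));
      }
    }
  }
}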
2023-07-19 05:14:54,161 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743694161"}]},"ts":"1689743694161"} 2023-07-19 05:14:54,163 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-19 05:14:54,168 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:14:54,168 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:14:54,168 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:14:54,168 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:14:54,169 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6d8b7852ef4980a4c7cc34d99246dda0, ASSIGN}, {pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=db6251b838fdff2f42d347dbbd6e9f1e, ASSIGN}, {pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f4195847c04bfb9472308f2ce6493ef5, ASSIGN}, {pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17d63eb81f76d3255005aee63b4da396, ASSIGN}, {pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e72386175ab3d5172354fe9cee7cced9, ASSIGN}] 2023-07-19 05:14:54,171 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6d8b7852ef4980a4c7cc34d99246dda0, ASSIGN 2023-07-19 05:14:54,171 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=db6251b838fdff2f42d347dbbd6e9f1e, ASSIGN 2023-07-19 05:14:54,172 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f4195847c04bfb9472308f2ce6493ef5, ASSIGN 2023-07-19 05:14:54,172 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e72386175ab3d5172354fe9cee7cced9, ASSIGN 2023-07-19 05:14:54,172 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17d63eb81f76d3255005aee63b4da396, ASSIGN 2023-07-19 05:14:54,173 INFO [PEWorker-5] 
assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6d8b7852ef4980a4c7cc34d99246dda0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41899,1689743683228; forceNewPlan=false, retain=false 2023-07-19 05:14:54,173 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17d63eb81f76d3255005aee63b4da396, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41899,1689743683228; forceNewPlan=false, retain=false 2023-07-19 05:14:54,173 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e72386175ab3d5172354fe9cee7cced9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41979,1689743683435; forceNewPlan=false, retain=false 2023-07-19 05:14:54,173 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f4195847c04bfb9472308f2ce6493ef5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41899,1689743683228; forceNewPlan=false, retain=false 2023-07-19 05:14:54,173 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=db6251b838fdff2f42d347dbbd6e9f1e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41979,1689743683435; forceNewPlan=false, retain=false 2023-07-19 05:14:54,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-19 05:14:54,323 INFO [jenkins-hbase4:35853] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-19 05:14:54,327 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=17d63eb81f76d3255005aee63b4da396, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:54,327 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=6d8b7852ef4980a4c7cc34d99246dda0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:54,327 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=e72386175ab3d5172354fe9cee7cced9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:54,327 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=f4195847c04bfb9472308f2ce6493ef5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:54,327 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743694327"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743694327"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743694327"}]},"ts":"1689743694327"} 2023-07-19 05:14:54,327 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=db6251b838fdff2f42d347dbbd6e9f1e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:54,327 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743694327"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743694327"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743694327"}]},"ts":"1689743694327"} 2023-07-19 05:14:54,327 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743694327"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743694327"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743694327"}]},"ts":"1689743694327"} 2023-07-19 05:14:54,327 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743694327"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743694327"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743694327"}]},"ts":"1689743694327"} 2023-07-19 05:14:54,327 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743694327"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743694327"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743694327"}]},"ts":"1689743694327"} 2023-07-19 05:14:54,329 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE; OpenRegionProcedure 
e72386175ab3d5172354fe9cee7cced9, server=jenkins-hbase4.apache.org,41979,1689743683435}] 2023-07-19 05:14:54,330 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=61, state=RUNNABLE; OpenRegionProcedure f4195847c04bfb9472308f2ce6493ef5, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:54,331 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=59, state=RUNNABLE; OpenRegionProcedure 6d8b7852ef4980a4c7cc34d99246dda0, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:54,335 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=62, state=RUNNABLE; OpenRegionProcedure 17d63eb81f76d3255005aee63b4da396, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:54,341 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=60, state=RUNNABLE; OpenRegionProcedure db6251b838fdff2f42d347dbbd6e9f1e, server=jenkins-hbase4.apache.org,41979,1689743683435}] 2023-07-19 05:14:54,494 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5. 2023-07-19 05:14:54,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f4195847c04bfb9472308f2ce6493ef5, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-19 05:14:54,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f4195847c04bfb9472308f2ce6493ef5 2023-07-19 05:14:54,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:54,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f4195847c04bfb9472308f2ce6493ef5 2023-07-19 05:14:54,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f4195847c04bfb9472308f2ce6493ef5 2023-07-19 05:14:54,499 INFO [StoreOpener-f4195847c04bfb9472308f2ce6493ef5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f4195847c04bfb9472308f2ce6493ef5 2023-07-19 05:14:54,502 DEBUG [StoreOpener-f4195847c04bfb9472308f2ce6493ef5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5/f 2023-07-19 05:14:54,502 DEBUG [StoreOpener-f4195847c04bfb9472308f2ce6493ef5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5/f 2023-07-19 05:14:54,503 INFO 
[StoreOpener-f4195847c04bfb9472308f2ce6493ef5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f4195847c04bfb9472308f2ce6493ef5 columnFamilyName f 2023-07-19 05:14:54,503 INFO [StoreOpener-f4195847c04bfb9472308f2ce6493ef5-1] regionserver.HStore(310): Store=f4195847c04bfb9472308f2ce6493ef5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:54,504 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5 2023-07-19 05:14:54,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5 2023-07-19 05:14:54,506 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e. 
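The CompactionConfiguration line above is each store echoing its effective defaults: 3-10 files per minor compaction, ratio 1.2, off-peak ratio 5.0, 128 MB minimum compact size, and weekly major compactions with 0.5 jitter. Those values are driven by standard site configuration keys; a sketch of setting them on a plain Configuration (the key names are the documented hbase-default.xml ones, and the values simply restate the defaults visible in the log):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuning {
  public static Configuration defaultsSeenInLog() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.hstore.compaction.min", 3);                 // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);                // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);          // ratio
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);  // off-peak ratio
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize 128 MB
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);     // major period, 7 days in ms
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);   // major jitter
    return conf;
  }
}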
2023-07-19 05:14:54,506 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => db6251b838fdff2f42d347dbbd6e9f1e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-19 05:14:54,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop db6251b838fdff2f42d347dbbd6e9f1e 2023-07-19 05:14:54,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:54,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for db6251b838fdff2f42d347dbbd6e9f1e 2023-07-19 05:14:54,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for db6251b838fdff2f42d347dbbd6e9f1e 2023-07-19 05:14:54,509 INFO [StoreOpener-db6251b838fdff2f42d347dbbd6e9f1e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region db6251b838fdff2f42d347dbbd6e9f1e 2023-07-19 05:14:54,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f4195847c04bfb9472308f2ce6493ef5 2023-07-19 05:14:54,511 DEBUG [StoreOpener-db6251b838fdff2f42d347dbbd6e9f1e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/db6251b838fdff2f42d347dbbd6e9f1e/f 2023-07-19 05:14:54,511 DEBUG [StoreOpener-db6251b838fdff2f42d347dbbd6e9f1e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/db6251b838fdff2f42d347dbbd6e9f1e/f 2023-07-19 05:14:54,512 INFO [StoreOpener-db6251b838fdff2f42d347dbbd6e9f1e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region db6251b838fdff2f42d347dbbd6e9f1e columnFamilyName f 2023-07-19 05:14:54,513 INFO [StoreOpener-db6251b838fdff2f42d347dbbd6e9f1e-1] regionserver.HStore(310): Store=db6251b838fdff2f42d347dbbd6e9f1e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:54,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) 
under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/db6251b838fdff2f42d347dbbd6e9f1e 2023-07-19 05:14:54,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:14:54,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/db6251b838fdff2f42d347dbbd6e9f1e 2023-07-19 05:14:54,516 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f4195847c04bfb9472308f2ce6493ef5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11354738400, jitterRate=0.057492420077323914}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:54,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f4195847c04bfb9472308f2ce6493ef5: 2023-07-19 05:14:54,517 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5., pid=65, masterSystemTime=1689743694489 2023-07-19 05:14:54,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5. 2023-07-19 05:14:54,519 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5. 2023-07-19 05:14:54,519 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396. 
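Each open above first looks for recovered.edits under the region directory, finds none, and then writes a recovered.edits/1.seqid marker (newMaxSeqId=1, maxSeqId=-1). Those files live on the mini cluster's HDFS and can be inspected directly; a small sketch using the Hadoop FileSystem API (the region path is copied from the log, everything else is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListRecoveredEdits {
  public static void main(String[] args) throws Exception {
    Path regionDir = new Path("hdfs://localhost:34189/user/jenkins/test-data/"
        + "489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/"
        + "Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5");
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(regionDir.toUri(), conf)) {
      Path recoveredEdits = new Path(regionDir, "recovered.edits");
      if (fs.exists(recoveredEdits)) {
        for (FileStatus st : fs.listStatus(recoveredEdits)) {
          System.out.println(st.getPath() + " len=" + st.getLen());
        }
      }
    }
  }
}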
2023-07-19 05:14:54,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 17d63eb81f76d3255005aee63b4da396, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-19 05:14:54,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 17d63eb81f76d3255005aee63b4da396 2023-07-19 05:14:54,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:54,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 17d63eb81f76d3255005aee63b4da396 2023-07-19 05:14:54,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 17d63eb81f76d3255005aee63b4da396 2023-07-19 05:14:54,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for db6251b838fdff2f42d347dbbd6e9f1e 2023-07-19 05:14:54,522 INFO [StoreOpener-17d63eb81f76d3255005aee63b4da396-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 17d63eb81f76d3255005aee63b4da396 2023-07-19 05:14:54,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/db6251b838fdff2f42d347dbbd6e9f1e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:14:54,524 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened db6251b838fdff2f42d347dbbd6e9f1e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11394497120, jitterRate=0.061195239424705505}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:54,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for db6251b838fdff2f42d347dbbd6e9f1e: 2023-07-19 05:14:54,524 DEBUG [StoreOpener-17d63eb81f76d3255005aee63b4da396-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/17d63eb81f76d3255005aee63b4da396/f 2023-07-19 05:14:54,524 DEBUG [StoreOpener-17d63eb81f76d3255005aee63b4da396-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/17d63eb81f76d3255005aee63b4da396/f 2023-07-19 05:14:54,525 INFO [StoreOpener-17d63eb81f76d3255005aee63b4da396-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; 
throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 17d63eb81f76d3255005aee63b4da396 columnFamilyName f 2023-07-19 05:14:54,525 INFO [StoreOpener-17d63eb81f76d3255005aee63b4da396-1] regionserver.HStore(310): Store=17d63eb81f76d3255005aee63b4da396/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:54,526 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=f4195847c04bfb9472308f2ce6493ef5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:54,526 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743694526"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743694526"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743694526"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743694526"}]},"ts":"1689743694526"} 2023-07-19 05:14:54,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/17d63eb81f76d3255005aee63b4da396 2023-07-19 05:14:54,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/17d63eb81f76d3255005aee63b4da396 2023-07-19 05:14:54,529 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e., pid=68, masterSystemTime=1689743694498 2023-07-19 05:14:54,532 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e. 2023-07-19 05:14:54,532 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e. 2023-07-19 05:14:54,532 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9. 
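The "Opened ...; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=..., jitterRate=...}}}" entries show each region picking a per-region split threshold: the configured max store file size plus a random jitter, with a 256 MB initial size for the stepping policy. The knobs behind those numbers are standard configuration keys; a sketch follows (the keys are the documented ones, and the mapping to the logged values is a reading of the log rather than something the log states):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class SplitPolicyTuning {
  public static Configuration defaultsImpliedByLog() {
    Configuration conf = HBaseConfiguration.create();
    // Policy class logged as SteppingSplitPolicy, the branch-2 default.
    conf.set("hbase.regionserver.region.split.policy",
        "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
    // Base max store file size; each region's desiredMaxFileSize adds a random jitterRate to this.
    conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024);
    // initialSize=268435456 in the log, i.e. 2 x the 128 MB memstore flush size.
    conf.setLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
    return conf;
  }
}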
2023-07-19 05:14:54,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e72386175ab3d5172354fe9cee7cced9, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-19 05:14:54,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e72386175ab3d5172354fe9cee7cced9 2023-07-19 05:14:54,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:54,533 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=61 2023-07-19 05:14:54,533 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=db6251b838fdff2f42d347dbbd6e9f1e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:54,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e72386175ab3d5172354fe9cee7cced9 2023-07-19 05:14:54,533 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743694533"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743694533"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743694533"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743694533"}]},"ts":"1689743694533"} 2023-07-19 05:14:54,533 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=61, state=SUCCESS; OpenRegionProcedure f4195847c04bfb9472308f2ce6493ef5, server=jenkins-hbase4.apache.org,41899,1689743683228 in 199 msec 2023-07-19 05:14:54,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e72386175ab3d5172354fe9cee7cced9 2023-07-19 05:14:54,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 17d63eb81f76d3255005aee63b4da396 2023-07-19 05:14:54,541 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f4195847c04bfb9472308f2ce6493ef5, ASSIGN in 364 msec 2023-07-19 05:14:54,543 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=60 2023-07-19 05:14:54,543 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=60, state=SUCCESS; OpenRegionProcedure db6251b838fdff2f42d347dbbd6e9f1e, server=jenkins-hbase4.apache.org,41979,1689743683435 in 201 msec 2023-07-19 05:14:54,546 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=db6251b838fdff2f42d347dbbd6e9f1e, ASSIGN in 374 msec 2023-07-19 05:14:54,553 INFO [StoreOpener-e72386175ab3d5172354fe9cee7cced9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e72386175ab3d5172354fe9cee7cced9 2023-07-19 05:14:54,554 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/17d63eb81f76d3255005aee63b4da396/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:14:54,555 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 17d63eb81f76d3255005aee63b4da396; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10799177760, jitterRate=0.005751803517341614}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:54,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 17d63eb81f76d3255005aee63b4da396: 2023-07-19 05:14:54,555 DEBUG [StoreOpener-e72386175ab3d5172354fe9cee7cced9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/e72386175ab3d5172354fe9cee7cced9/f 2023-07-19 05:14:54,556 DEBUG [StoreOpener-e72386175ab3d5172354fe9cee7cced9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/e72386175ab3d5172354fe9cee7cced9/f 2023-07-19 05:14:54,556 INFO [StoreOpener-e72386175ab3d5172354fe9cee7cced9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e72386175ab3d5172354fe9cee7cced9 columnFamilyName f 2023-07-19 05:14:54,557 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396., pid=67, masterSystemTime=1689743694489 2023-07-19 05:14:54,557 INFO [StoreOpener-e72386175ab3d5172354fe9cee7cced9-1] regionserver.HStore(310): Store=e72386175ab3d5172354fe9cee7cced9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:54,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/e72386175ab3d5172354fe9cee7cced9 2023-07-19 05:14:54,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/e72386175ab3d5172354fe9cee7cced9 2023-07-19 05:14:54,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396. 2023-07-19 05:14:54,559 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396. 2023-07-19 05:14:54,560 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0. 2023-07-19 05:14:54,560 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6d8b7852ef4980a4c7cc34d99246dda0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-19 05:14:54,560 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=17d63eb81f76d3255005aee63b4da396, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:54,560 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 6d8b7852ef4980a4c7cc34d99246dda0 2023-07-19 05:14:54,560 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743694560"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743694560"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743694560"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743694560"}]},"ts":"1689743694560"} 2023-07-19 05:14:54,560 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:54,560 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6d8b7852ef4980a4c7cc34d99246dda0 2023-07-19 05:14:54,560 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6d8b7852ef4980a4c7cc34d99246dda0 2023-07-19 05:14:54,565 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e72386175ab3d5172354fe9cee7cced9 2023-07-19 05:14:54,567 INFO [StoreOpener-6d8b7852ef4980a4c7cc34d99246dda0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6d8b7852ef4980a4c7cc34d99246dda0 2023-07-19 05:14:54,571 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/e72386175ab3d5172354fe9cee7cced9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:14:54,572 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e72386175ab3d5172354fe9cee7cced9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9800174880, jitterRate=-0.08728758990764618}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:54,572 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e72386175ab3d5172354fe9cee7cced9: 2023-07-19 05:14:54,573 DEBUG [StoreOpener-6d8b7852ef4980a4c7cc34d99246dda0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6d8b7852ef4980a4c7cc34d99246dda0/f 2023-07-19 05:14:54,573 DEBUG [StoreOpener-6d8b7852ef4980a4c7cc34d99246dda0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6d8b7852ef4980a4c7cc34d99246dda0/f 2023-07-19 05:14:54,574 INFO [StoreOpener-6d8b7852ef4980a4c7cc34d99246dda0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6d8b7852ef4980a4c7cc34d99246dda0 columnFamilyName f 2023-07-19 05:14:54,574 INFO [StoreOpener-6d8b7852ef4980a4c7cc34d99246dda0-1] regionserver.HStore(310): Store=6d8b7852ef4980a4c7cc34d99246dda0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:54,575 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9., pid=64, masterSystemTime=1689743694498 2023-07-19 05:14:54,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6d8b7852ef4980a4c7cc34d99246dda0 2023-07-19 05:14:54,576 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=62 2023-07-19 05:14:54,576 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=62, state=SUCCESS; OpenRegionProcedure 17d63eb81f76d3255005aee63b4da396, server=jenkins-hbase4.apache.org,41899,1689743683228 in 231 msec 2023-07-19 05:14:54,577 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6d8b7852ef4980a4c7cc34d99246dda0 2023-07-19 05:14:54,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9. 2023-07-19 05:14:54,578 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9. 2023-07-19 05:14:54,579 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=e72386175ab3d5172354fe9cee7cced9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:54,579 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743694579"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743694579"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743694579"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743694579"}]},"ts":"1689743694579"} 2023-07-19 05:14:54,581 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17d63eb81f76d3255005aee63b4da396, ASSIGN in 408 msec 2023-07-19 05:14:54,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6d8b7852ef4980a4c7cc34d99246dda0 2023-07-19 05:14:54,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6d8b7852ef4980a4c7cc34d99246dda0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:14:54,586 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6d8b7852ef4980a4c7cc34d99246dda0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9982969280, jitterRate=-0.07026353478431702}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:54,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6d8b7852ef4980a4c7cc34d99246dda0: 2023-07-19 05:14:54,589 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0., pid=66, masterSystemTime=1689743694489 2023-07-19 05:14:54,596 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0. 2023-07-19 05:14:54,596 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0. 
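At this point all five regions are open: three on jenkins-hbase4.apache.org,41899,1689743683228 and two on jenkins-hbase4.apache.org,41979,1689743683435, matching the OPENING/OPEN meta updates above. A client can confirm the placement with a RegionLocator; a rough sketch, assuming the standard 2.x API (not the test's own verification code):

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ShowRegionPlacement {
  public static void main(String[] args) throws Exception {
    TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection();
         RegionLocator locator = conn.getRegionLocator(tn)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // Prints each encoded region name and the server currently hosting it.
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}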
2023-07-19 05:14:54,598 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=6d8b7852ef4980a4c7cc34d99246dda0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:54,598 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743694598"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743694598"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743694598"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743694598"}]},"ts":"1689743694598"} 2023-07-19 05:14:54,598 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=63 2023-07-19 05:14:54,598 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; OpenRegionProcedure e72386175ab3d5172354fe9cee7cced9, server=jenkins-hbase4.apache.org,41979,1689743683435 in 259 msec 2023-07-19 05:14:54,600 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e72386175ab3d5172354fe9cee7cced9, ASSIGN in 429 msec 2023-07-19 05:14:54,602 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=59 2023-07-19 05:14:54,602 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=59, state=SUCCESS; OpenRegionProcedure 6d8b7852ef4980a4c7cc34d99246dda0, server=jenkins-hbase4.apache.org,41899,1689743683228 in 269 msec 2023-07-19 05:14:54,604 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=58 2023-07-19 05:14:54,604 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6d8b7852ef4980a4c7cc34d99246dda0, ASSIGN in 433 msec 2023-07-19 05:14:54,605 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743694605"}]},"ts":"1689743694605"} 2023-07-19 05:14:54,607 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-19 05:14:54,609 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-19 05:14:54,610 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=58, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 1.0380 sec 2023-07-19 05:14:54,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-19 05:14:54,688 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 58 completed 2023-07-19 05:14:54,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:54,690 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:54,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:54,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:54,692 INFO [Listener at localhost/38799] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:54,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:54,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=69, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:54,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-19 05:14:54,698 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743694698"}]},"ts":"1689743694698"} 2023-07-19 05:14:54,699 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-19 05:14:54,701 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-19 05:14:54,702 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6d8b7852ef4980a4c7cc34d99246dda0, UNASSIGN}, {pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=db6251b838fdff2f42d347dbbd6e9f1e, UNASSIGN}, {pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f4195847c04bfb9472308f2ce6493ef5, UNASSIGN}, {pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17d63eb81f76d3255005aee63b4da396, UNASSIGN}, {pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e72386175ab3d5172354fe9cee7cced9, UNASSIGN}] 2023-07-19 05:14:54,705 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6d8b7852ef4980a4c7cc34d99246dda0, UNASSIGN 2023-07-19 05:14:54,705 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=db6251b838fdff2f42d347dbbd6e9f1e, UNASSIGN 2023-07-19 05:14:54,705 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f4195847c04bfb9472308f2ce6493ef5, UNASSIGN 2023-07-19 05:14:54,705 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17d63eb81f76d3255005aee63b4da396, UNASSIGN 2023-07-19 05:14:54,706 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=6d8b7852ef4980a4c7cc34d99246dda0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:54,706 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e72386175ab3d5172354fe9cee7cced9, UNASSIGN 2023-07-19 05:14:54,706 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743694706"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743694706"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743694706"}]},"ts":"1689743694706"} 2023-07-19 05:14:54,707 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=db6251b838fdff2f42d347dbbd6e9f1e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:54,707 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=f4195847c04bfb9472308f2ce6493ef5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:54,707 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743694707"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743694707"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743694707"}]},"ts":"1689743694707"} 2023-07-19 05:14:54,707 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743694707"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743694707"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743694707"}]},"ts":"1689743694707"} 2023-07-19 05:14:54,708 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=17d63eb81f76d3255005aee63b4da396, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:14:54,708 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743694708"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743694708"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743694708"}]},"ts":"1689743694708"} 2023-07-19 05:14:54,708 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=e72386175ab3d5172354fe9cee7cced9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:54,708 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743694708"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743694708"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743694708"}]},"ts":"1689743694708"} 2023-07-19 05:14:54,709 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=70, state=RUNNABLE; CloseRegionProcedure 6d8b7852ef4980a4c7cc34d99246dda0, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:54,710 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=71, state=RUNNABLE; CloseRegionProcedure db6251b838fdff2f42d347dbbd6e9f1e, server=jenkins-hbase4.apache.org,41979,1689743683435}] 2023-07-19 05:14:54,712 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=72, state=RUNNABLE; CloseRegionProcedure f4195847c04bfb9472308f2ce6493ef5, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:54,713 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=73, state=RUNNABLE; CloseRegionProcedure 17d63eb81f76d3255005aee63b4da396, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:14:54,715 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=74, state=RUNNABLE; CloseRegionProcedure e72386175ab3d5172354fe9cee7cced9, server=jenkins-hbase4.apache.org,41979,1689743683435}] 2023-07-19 05:14:54,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-19 05:14:54,865 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close db6251b838fdff2f42d347dbbd6e9f1e 2023-07-19 05:14:54,865 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6d8b7852ef4980a4c7cc34d99246dda0 2023-07-19 05:14:54,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing db6251b838fdff2f42d347dbbd6e9f1e, disabling compactions & flushes 2023-07-19 05:14:54,867 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e. 2023-07-19 05:14:54,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e. 
2023-07-19 05:14:54,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6d8b7852ef4980a4c7cc34d99246dda0, disabling compactions & flushes 2023-07-19 05:14:54,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e. after waiting 0 ms 2023-07-19 05:14:54,868 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0. 2023-07-19 05:14:54,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e. 2023-07-19 05:14:54,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0. 2023-07-19 05:14:54,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0. after waiting 0 ms 2023-07-19 05:14:54,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0. 2023-07-19 05:14:54,877 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/db6251b838fdff2f42d347dbbd6e9f1e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:14:54,877 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/6d8b7852ef4980a4c7cc34d99246dda0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:14:54,878 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e. 2023-07-19 05:14:54,878 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for db6251b838fdff2f42d347dbbd6e9f1e: 2023-07-19 05:14:54,878 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0. 
2023-07-19 05:14:54,878 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6d8b7852ef4980a4c7cc34d99246dda0: 2023-07-19 05:14:54,880 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6d8b7852ef4980a4c7cc34d99246dda0 2023-07-19 05:14:54,880 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f4195847c04bfb9472308f2ce6493ef5 2023-07-19 05:14:54,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f4195847c04bfb9472308f2ce6493ef5, disabling compactions & flushes 2023-07-19 05:14:54,883 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5. 2023-07-19 05:14:54,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5. 2023-07-19 05:14:54,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5. after waiting 0 ms 2023-07-19 05:14:54,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5. 2023-07-19 05:14:54,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:14:54,889 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5. 
2023-07-19 05:14:54,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f4195847c04bfb9472308f2ce6493ef5: 2023-07-19 05:14:54,891 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=6d8b7852ef4980a4c7cc34d99246dda0, regionState=CLOSED 2023-07-19 05:14:54,891 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743694891"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743694891"}]},"ts":"1689743694891"} 2023-07-19 05:14:54,892 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed db6251b838fdff2f42d347dbbd6e9f1e 2023-07-19 05:14:54,892 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e72386175ab3d5172354fe9cee7cced9 2023-07-19 05:14:54,893 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e72386175ab3d5172354fe9cee7cced9, disabling compactions & flushes 2023-07-19 05:14:54,893 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9. 2023-07-19 05:14:54,893 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9. 2023-07-19 05:14:54,893 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9. after waiting 0 ms 2023-07-19 05:14:54,893 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9. 2023-07-19 05:14:54,895 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=db6251b838fdff2f42d347dbbd6e9f1e, regionState=CLOSED 2023-07-19 05:14:54,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f4195847c04bfb9472308f2ce6493ef5 2023-07-19 05:14:54,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 17d63eb81f76d3255005aee63b4da396 2023-07-19 05:14:54,895 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743694895"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743694895"}]},"ts":"1689743694895"} 2023-07-19 05:14:54,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 17d63eb81f76d3255005aee63b4da396, disabling compactions & flushes 2023-07-19 05:14:54,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396. 
2023-07-19 05:14:54,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396. 2023-07-19 05:14:54,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396. after waiting 0 ms 2023-07-19 05:14:54,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396. 2023-07-19 05:14:54,899 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=f4195847c04bfb9472308f2ce6493ef5, regionState=CLOSED 2023-07-19 05:14:54,899 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743694899"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743694899"}]},"ts":"1689743694899"} 2023-07-19 05:14:54,902 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=70 2023-07-19 05:14:54,902 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=70, state=SUCCESS; CloseRegionProcedure 6d8b7852ef4980a4c7cc34d99246dda0, server=jenkins-hbase4.apache.org,41899,1689743683228 in 186 msec 2023-07-19 05:14:54,902 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=71 2023-07-19 05:14:54,902 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=71, state=SUCCESS; CloseRegionProcedure db6251b838fdff2f42d347dbbd6e9f1e, server=jenkins-hbase4.apache.org,41979,1689743683435 in 188 msec 2023-07-19 05:14:54,904 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6d8b7852ef4980a4c7cc34d99246dda0, UNASSIGN in 200 msec 2023-07-19 05:14:54,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/e72386175ab3d5172354fe9cee7cced9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:14:54,905 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=72 2023-07-19 05:14:54,905 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=db6251b838fdff2f42d347dbbd6e9f1e, UNASSIGN in 200 msec 2023-07-19 05:14:54,905 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=72, state=SUCCESS; CloseRegionProcedure f4195847c04bfb9472308f2ce6493ef5, server=jenkins-hbase4.apache.org,41899,1689743683228 in 189 msec 2023-07-19 05:14:54,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testTableMoveTruncateAndDrop/17d63eb81f76d3255005aee63b4da396/recovered.edits/4.seqid, newMaxSeqId=4, 
maxSeqId=1 2023-07-19 05:14:54,905 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9. 2023-07-19 05:14:54,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e72386175ab3d5172354fe9cee7cced9: 2023-07-19 05:14:54,906 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396. 2023-07-19 05:14:54,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 17d63eb81f76d3255005aee63b4da396: 2023-07-19 05:14:54,907 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f4195847c04bfb9472308f2ce6493ef5, UNASSIGN in 203 msec 2023-07-19 05:14:54,907 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e72386175ab3d5172354fe9cee7cced9 2023-07-19 05:14:54,908 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=e72386175ab3d5172354fe9cee7cced9, regionState=CLOSED 2023-07-19 05:14:54,908 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689743694908"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743694908"}]},"ts":"1689743694908"} 2023-07-19 05:14:54,908 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 17d63eb81f76d3255005aee63b4da396 2023-07-19 05:14:54,909 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=17d63eb81f76d3255005aee63b4da396, regionState=CLOSED 2023-07-19 05:14:54,909 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689743694909"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743694909"}]},"ts":"1689743694909"} 2023-07-19 05:14:54,912 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=74 2023-07-19 05:14:54,912 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=74, state=SUCCESS; CloseRegionProcedure e72386175ab3d5172354fe9cee7cced9, server=jenkins-hbase4.apache.org,41979,1689743683435 in 194 msec 2023-07-19 05:14:54,913 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=73 2023-07-19 05:14:54,913 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=73, state=SUCCESS; CloseRegionProcedure 17d63eb81f76d3255005aee63b4da396, server=jenkins-hbase4.apache.org,41899,1689743683228 in 198 msec 2023-07-19 05:14:54,914 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e72386175ab3d5172354fe9cee7cced9, UNASSIGN in 210 msec 2023-07-19 05:14:54,915 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=69 
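[Editor's note, not part of the test output] Each RegionStateStore put above records the region's transition in hbase:meta under the info family: info:regioninfo, info:sn (the hosting server) and info:state (CLOSING, then CLOSED). A hedged sketch of reading those states back with the ordinary client API; the row prefix and column names come straight from the log, everything else (class name, configuration) is illustrative:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaStateScan {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // hbase:meta row keys for a table start with "<tableName>,"; the region state lives
          // in info:state next to info:regioninfo and info:sn, as the puts above show.
          Scan scan = new Scan()
              .setRowPrefixFilter(Bytes.toBytes("Group_testTableMoveTruncateAndDrop,"))
              .addColumn(Bytes.toBytes("info"), Bytes.toBytes("state"));
          try (ResultScanner rs = meta.getScanner(scan)) {
            for (Result r : rs) {
              System.out.println(Bytes.toString(r.getRow()) + " => "
                  + Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"))));
            }
          }
        }
      }
    }

Run against the cluster at this point in the log, such a scan would show all five regions of Group_testTableMoveTruncateAndDrop in state CLOSED before the table itself is marked DISABLED.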
2023-07-19 05:14:54,915 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17d63eb81f76d3255005aee63b4da396, UNASSIGN in 211 msec 2023-07-19 05:14:54,915 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743694915"}]},"ts":"1689743694915"} 2023-07-19 05:14:54,917 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-19 05:14:54,919 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-19 05:14:54,921 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=69, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 227 msec 2023-07-19 05:14:55,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-19 05:14:55,002 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 69 completed 2023-07-19 05:14:55,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:55,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:55,019 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:55,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_2021517430' 2023-07-19 05:14:55,021 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=80, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:55,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:55,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:55,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:55,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:55,039 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d8b7852ef4980a4c7cc34d99246dda0 2023-07-19 05:14:55,039 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17d63eb81f76d3255005aee63b4da396 2023-07-19 05:14:55,040 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e72386175ab3d5172354fe9cee7cced9 2023-07-19 05:14:55,039 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5 2023-07-19 05:14:55,039 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db6251b838fdff2f42d347dbbd6e9f1e 2023-07-19 05:14:55,044 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db6251b838fdff2f42d347dbbd6e9f1e/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db6251b838fdff2f42d347dbbd6e9f1e/recovered.edits] 2023-07-19 05:14:55,044 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d8b7852ef4980a4c7cc34d99246dda0/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d8b7852ef4980a4c7cc34d99246dda0/recovered.edits] 2023-07-19 05:14:55,044 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5/recovered.edits] 2023-07-19 05:14:55,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-19 05:14:55,046 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e72386175ab3d5172354fe9cee7cced9/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e72386175ab3d5172354fe9cee7cced9/recovered.edits] 2023-07-19 05:14:55,046 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17d63eb81f76d3255005aee63b4da396/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17d63eb81f76d3255005aee63b4da396/recovered.edits] 2023-07-19 05:14:55,061 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from 
FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db6251b838fdff2f42d347dbbd6e9f1e/recovered.edits/4.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testTableMoveTruncateAndDrop/db6251b838fdff2f42d347dbbd6e9f1e/recovered.edits/4.seqid 2023-07-19 05:14:55,061 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d8b7852ef4980a4c7cc34d99246dda0/recovered.edits/4.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testTableMoveTruncateAndDrop/6d8b7852ef4980a4c7cc34d99246dda0/recovered.edits/4.seqid 2023-07-19 05:14:55,061 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5/recovered.edits/4.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5/recovered.edits/4.seqid 2023-07-19 05:14:55,062 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db6251b838fdff2f42d347dbbd6e9f1e 2023-07-19 05:14:55,062 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f4195847c04bfb9472308f2ce6493ef5 2023-07-19 05:14:55,063 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e72386175ab3d5172354fe9cee7cced9/recovered.edits/4.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testTableMoveTruncateAndDrop/e72386175ab3d5172354fe9cee7cced9/recovered.edits/4.seqid 2023-07-19 05:14:55,063 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d8b7852ef4980a4c7cc34d99246dda0 2023-07-19 05:14:55,063 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17d63eb81f76d3255005aee63b4da396/recovered.edits/4.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testTableMoveTruncateAndDrop/17d63eb81f76d3255005aee63b4da396/recovered.edits/4.seqid 2023-07-19 05:14:55,063 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e72386175ab3d5172354fe9cee7cced9 2023-07-19 05:14:55,064 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted 
hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17d63eb81f76d3255005aee63b4da396 2023-07-19 05:14:55,064 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-19 05:14:55,067 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=80, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:55,074 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-19 05:14:55,079 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-19 05:14:55,081 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=80, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:55,081 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-19 05:14:55,081 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743695081"}]},"ts":"9223372036854775807"} 2023-07-19 05:14:55,081 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743695081"}]},"ts":"9223372036854775807"} 2023-07-19 05:14:55,081 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743695081"}]},"ts":"9223372036854775807"} 2023-07-19 05:14:55,081 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743695081"}]},"ts":"9223372036854775807"} 2023-07-19 05:14:55,081 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743695081"}]},"ts":"9223372036854775807"} 2023-07-19 05:14:55,084 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-19 05:14:55,084 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6d8b7852ef4980a4c7cc34d99246dda0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689743693625.6d8b7852ef4980a4c7cc34d99246dda0.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => db6251b838fdff2f42d347dbbd6e9f1e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689743693625.db6251b838fdff2f42d347dbbd6e9f1e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => f4195847c04bfb9472308f2ce6493ef5, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689743693625.f4195847c04bfb9472308f2ce6493ef5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 
'r\x1C\xC7r\x1B'}, {ENCODED => 17d63eb81f76d3255005aee63b4da396, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689743693625.17d63eb81f76d3255005aee63b4da396.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => e72386175ab3d5172354fe9cee7cced9, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689743693625.e72386175ab3d5172354fe9cee7cced9.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-19 05:14:55,084 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-19 05:14:55,084 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689743695084"}]},"ts":"9223372036854775807"} 2023-07-19 05:14:55,086 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-19 05:14:55,089 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=80, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 05:14:55,091 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=80, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 80 msec 2023-07-19 05:14:55,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-19 05:14:55,146 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 80 completed 2023-07-19 05:14:55,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:55,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:55,152 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41899] ipc.CallRunner(144): callId: 165 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:41500 deadline: 1689743755152, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43237 startCode=1689743687175. As of locationSeqNum=6. 
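[Editor's note, not part of the test output] By this point DeleteTableProcedure pid=80 has archived the region directories under .../archive, removed the five region rows plus the table-state row from hbase:meta, and RSGroupAdminEndpoint has dropped the deleted table from rsgroup Group_testTableMoveTruncateAndDrop_2021517430. A minimal sketch of the equivalent client-side sequence: admin.deleteTable drives the procedure, and RSGroupAdminClient (the same internal client visible in the stack trace further down) can confirm the table is gone from the group. The RSGroupAdminClient constructor and the getTables() accessor are assumed from branch-2's hbase-rsgroup module; this is illustrative, not the test's own code:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class DeleteTableAndCheckGroup {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        String group = "Group_testTableMoveTruncateAndDrop_2021517430";  // group name from the log
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.deleteTable(tn);   // runs a DeleteTableProcedure like pid=80 above
          // The RSGroupAdminEndpoint coprocessor removes the deleted table from its group;
          // reading the group back should no longer list the table.
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
          System.out.println("table still in group? "
              + (info != null && info.getTables().contains(tn)));
        }
      }
    }

Note that the table must already be disabled (as in the preceding log entries) or deleteTable fails with a TableNotDisabledException.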
2023-07-19 05:14:55,256 DEBUG [hconnection-0x18ddab65-shared-pool-10] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:14:55,258 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57856, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:14:55,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:55,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:55,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:14:55,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 05:14:55,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:14:55,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979] to rsgroup default 2023-07-19 05:14:55,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:55,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:55,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:55,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:55,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_2021517430, current retry=0 2023-07-19 05:14:55,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41899,1689743683228, jenkins-hbase4.apache.org,41979,1689743683435] are moved back to Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:55,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_2021517430 => default 2023-07-19 05:14:55,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:14:55,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup 
Group_testTableMoveTruncateAndDrop_2021517430 2023-07-19 05:14:55,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:55,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:55,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 05:14:55,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:14:55,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:14:55,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 05:14:55,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:14:55,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:14:55,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:14:55,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:14:55,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:55,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:14:55,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:14:55,301 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:14:55,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:14:55,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:55,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:55,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:14:55,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:14:55,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:55,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:55,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:14:55,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:55,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 149 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744895314, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:14:55,315 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 05:14:55,317 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:14:55,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:55,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:55,318 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:14:55,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:14:55,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:55,344 INFO [Listener at localhost/38799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=506 (was 419) Potentially hanging thread: hconnection-0x18ddab65-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp946031351-633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54772@0x09433678 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1181912059.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-573178852_17 at /127.0.0.1:45792 [Receiving block BP-1580366368-172.31.14.131-1689743677571:blk_1073741843_1019] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54772@0x09433678-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1580366368-172.31.14.131-1689743677571:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18ddab65-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_965506333_17 at /127.0.0.1:51738 [Receiving block BP-1580366368-172.31.14.131-1689743677571:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43237 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1580366368-172.31.14.131-1689743677571:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd-prefix:jenkins-hbase4.apache.org,45681,1689743683028.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-573178852_17 at /127.0.0.1:54014 [Receiving block BP-1580366368-172.31.14.131-1689743677571:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18ddab65-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-631 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/779537082.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-573178852_17 at /127.0.0.1:51794 
[Receiving block BP-1580366368-172.31.14.131-1689743677571:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_156935409_17 at /127.0.0.1:57982 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RSProcedureDispatcher-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-303335306_17 at /127.0.0.1:40664 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1580366368-172.31.14.131-1689743677571:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp946031351-632-acceptor-0@34413262-ServerConnector@35efd609{HTTP/1.1, (http/1.1)}{0.0.0.0:35859} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18ddab65-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1580366368-172.31.14.131-1689743677571:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-634 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18ddab65-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_965506333_17 at /127.0.0.1:53996 [Receiving block BP-1580366368-172.31.14.131-1689743677571:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18ddab65-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1580366368-172.31.14.131-1689743677571:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd-prefix:jenkins-hbase4.apache.org,43237,1689743687175 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1000085093_17 at /127.0.0.1:51356 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:43237-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/cluster_bf8aab3d-fc29-3d9a-7b1d-919a6995935e/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43237Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_965506333_17 at /127.0.0.1:45730 [Receiving block BP-1580366368-172.31.14.131-1689743677571:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5b070797-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5b070797-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1580366368-172.31.14.131-1689743677571:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3cbddc65-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/cluster_bf8aab3d-fc29-3d9a-7b1d-919a6995935e/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:34189 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:34189 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54772@0x09433678-SendThread(127.0.0.1:54772) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) - Thread LEAK? -, OpenFileDescriptor=814 (was 680) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=355 (was 316) - SystemLoadAverage LEAK? 
-, ProcessCount=173 (was 173), AvailableMemoryMB=3325 (was 3723) 2023-07-19 05:14:55,346 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-19 05:14:55,363 INFO [Listener at localhost/38799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=506, OpenFileDescriptor=814, MaxFileDescriptor=60000, SystemLoadAverage=355, ProcessCount=173, AvailableMemoryMB=3324 2023-07-19 05:14:55,363 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-19 05:14:55,363 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-19 05:14:55,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:55,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:55,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:14:55,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 05:14:55,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:14:55,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:14:55,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:14:55,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:14:55,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:55,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:14:55,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:14:55,391 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:14:55,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:14:55,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:55,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-19 05:14:55,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:14:55,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:14:55,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:55,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:55,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:14:55,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:55,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 177 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744895416, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:14:55,417 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 05:14:55,419 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:14:55,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:55,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:55,420 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:14:55,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:14:55,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:55,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-19 05:14:55,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:55,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:46730 deadline: 1689744895421, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-19 05:14:55,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-19 05:14:55,423 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:55,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:46730 deadline: 1689744895423, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-19 05:14:55,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-19 05:14:55,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:55,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 187 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:46730 deadline: 1689744895424, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-19 05:14:55,425 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-19 05:14:55,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-19 05:14:55,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:55,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:55,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:55,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:14:55,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:55,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:55,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:55,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:55,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:14:55,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
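The add rsgroup attempts above trace out the group-name rule enforced server-side by RSGroupInfoManagerImpl.checkGroupName: foo*, foo@ and - are each rejected with ConstraintException ("RSGroup name should only contain alphanumeric characters"), while foo_123 is created successfully, so the underscore is evidently tolerated alongside letters and digits. A minimal standalone sketch of an equivalent client-side pre-check, inferred from this log rather than copied from the HBase source (the regex and the exception type used here are assumptions):

import java.util.regex.Pattern;

public final class RSGroupNameCheck {
  // Pattern inferred from the log: "foo*", "foo@" and "-" are rejected,
  // "foo_123" is accepted, so letters, digits and underscore appear to be the allowed set.
  private static final Pattern VALID_GROUP_NAME = Pattern.compile("[a-zA-Z0-9_]+");

  static void checkGroupName(String name) {
    if (name == null || !VALID_GROUP_NAME.matcher(name).matches()) {
      // The real server throws org.apache.hadoop.hbase.constraint.ConstraintException;
      // a plain IllegalArgumentException keeps this sketch dependency-free.
      throw new IllegalArgumentException(
          "RSGroup name should only contain alphanumeric characters: " + name);
    }
  }

  public static void main(String[] args) {
    for (String candidate : new String[] { "foo*", "foo@", "-", "foo_123" }) {
      try {
        checkGroupName(candidate);
        System.out.println(candidate + " -> accepted");
      } catch (IllegalArgumentException e) {
        System.out.println(candidate + " -> rejected: " + e.getMessage());
      }
    }
  }
}

The server-side check runs before any group state is written, which is why the rejected names never show up in the "Updating znode" lines that follow a successful add.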
2023-07-19 05:14:55,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:14:55,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:14:55,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:14:55,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-19 05:14:55,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:55,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:55,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 05:14:55,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:14:55,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:14:55,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
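The DEBUG lines above show where that group state lands: each add or remove rewrites a child of /hbase/rsgroup in ZooKeeper (/hbase/rsgroup/default, /hbase/rsgroup/master, /hbase/rsgroup/foo_123) and then logs the new GroupInfo count. A small sketch that lists those znodes directly, assuming the mini-cluster's ZooKeeper is still reachable at the quorum this run reports (127.0.0.1:54772 appears in the ReadOnlyZKClient thread names above):

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public final class ListRSGroupZNodes {
  public static void main(String[] args) throws Exception {
    // Quorum address taken from this run's ReadOnlyZKClient thread name; any other
    // cluster would use its own hbase.zookeeper.quorum value.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:54772", 30_000, event -> { });
    try {
      List<String> groups = zk.getChildren("/hbase/rsgroup", false);
      // Expected while foo_123 exists: something like [default, master, foo_123]
      System.out.println("rsgroup znodes: " + groups);
    } finally {
      zk.close();
    }
  }
}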
2023-07-19 05:14:55,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:14:55,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:14:55,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:14:55,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:14:55,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:55,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:14:55,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:14:55,467 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:14:55,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:14:55,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:55,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:55,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:14:55,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:14:55,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:55,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:55,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:14:55,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:55,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 221 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744895481, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:14:55,481 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:14:55,483 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:14:55,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:55,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:55,484 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:14:55,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:14:55,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:55,503 INFO [Listener at localhost/38799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=509 (was 506) Potentially hanging thread: hconnection-0x5b070797-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5b070797-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5b070797-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=814 (was 814), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=355 (was 355), ProcessCount=173 (was 173), AvailableMemoryMB=3324 (was 3324) 2023-07-19 05:14:55,503 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-19 05:14:55,520 INFO [Listener at localhost/38799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=509, OpenFileDescriptor=814, MaxFileDescriptor=60000, SystemLoadAverage=355, ProcessCount=173, AvailableMemoryMB=3323 2023-07-19 05:14:55,520 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-19 05:14:55,521 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-19 05:14:55,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:55,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:55,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:14:55,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
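The repeated "Got this on setup, FYI" warnings are TestRSGroupsBase's setup/teardown trying to move the active master's address (jenkins-hbase4.apache.org:35853, the master RPC port rather than a region server) into the master rsgroup; RSGroupAdminServer.moveServers refuses because that address is not a live region server, and the test deliberately logs and ignores the failure. A hedged sketch of that call pattern, using the classes named in the stack traces (the connection setup and the exact client constructor are assumptions, not taken from this log):

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class MoveMasterIntoGroup {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create(); // assumes an hbase-site.xml on the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // The master's host:port as it appears in the log; not a region server address.
      Address master = Address.fromParts("jenkins-hbase4.apache.org", 35853);
      try {
        rsGroupAdmin.moveServers(Collections.singleton(master), "master");
      } catch (ConstraintException e) {
        // Same outcome the test tolerates: the server is "either offline or it does not exist".
        System.out.println("move refused: " + e.getMessage());
      }
    }
  }
}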
2023-07-19 05:14:55,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:14:55,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:14:55,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:14:55,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:14:55,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:55,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:14:55,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:14:55,539 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:14:55,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:14:55,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:55,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:55,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:14:55,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:14:55,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:55,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:55,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:14:55,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:55,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 249 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744895558, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:14:55,559 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:14:55,561 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:14:55,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:55,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:55,563 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:14:55,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:14:55,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:55,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:55,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:55,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:14:55,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:55,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
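The teardown/setup sequence recorded above (remove rsgroup master, add rsgroup master, move servers [jenkins-hbase4.apache.org:35853] to rsgroup master) fails because the master's RPC address is not a live region server; the ConstraintException thrown in RSGroupAdminServer.moveServers is re-instantiated on the client via RemoteWithExtrasException, which is why TestRSGroupsBase only logs it as a setup warning. Below is a minimal client-side sketch of that call path, assuming an HBase 2.4 classpath with the hbase-rsgroup coprocessor endpoint enabled; the class name, connection handling, and literal host:port are illustrative and not taken from the test source.

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterAddressSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // The master's RPC address (port 35853 above) is not a region server,
          // so the rsgroup endpoint rejects the move.
          Address masterAddr = Address.fromParts("jenkins-hbase4.apache.org", 35853);
          try {
            rsGroupAdmin.moveServers(Collections.singleton(masterAddr), "master");
          } catch (ConstraintException e) {
            // "Server ... is either offline or it does not exist."
          }
        }
      }
    }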
2023-07-19 05:14:55,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:55,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 05:14:55,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:55,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:55,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:14:55,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:55,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:55,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979] to rsgroup bar 2023-07-19 05:14:55,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:55,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 05:14:55,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:55,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:55,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(238): Moving server region f6f6fceaa7e24dc750aa525625e896fa, which do not belong to RSGroup bar 2023-07-19 05:14:55,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=f6f6fceaa7e24dc750aa525625e896fa, REOPEN/MOVE 2023-07-19 05:14:55,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-19 05:14:55,593 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=f6f6fceaa7e24dc750aa525625e896fa, REOPEN/MOVE 2023-07-19 05:14:55,596 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=f6f6fceaa7e24dc750aa525625e896fa, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:14:55,596 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743695596"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743695596"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743695596"}]},"ts":"1689743695596"} 2023-07-19 05:14:55,598 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure f6f6fceaa7e24dc750aa525625e896fa, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:14:55,751 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:55,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f6f6fceaa7e24dc750aa525625e896fa, disabling compactions & flushes 2023-07-19 05:14:55,753 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:14:55,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:14:55,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. after waiting 0 ms 2023-07-19 05:14:55,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:14:55,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-19 05:14:55,762 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 
2023-07-19 05:14:55,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f6f6fceaa7e24dc750aa525625e896fa: 2023-07-19 05:14:55,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f6f6fceaa7e24dc750aa525625e896fa move to jenkins-hbase4.apache.org,45681,1689743683028 record at close sequenceid=10 2023-07-19 05:14:55,765 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:55,765 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=f6f6fceaa7e24dc750aa525625e896fa, regionState=CLOSED 2023-07-19 05:14:55,766 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743695765"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743695765"}]},"ts":"1689743695765"} 2023-07-19 05:14:55,771 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-19 05:14:55,771 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure f6f6fceaa7e24dc750aa525625e896fa, server=jenkins-hbase4.apache.org,43237,1689743687175 in 171 msec 2023-07-19 05:14:55,772 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=f6f6fceaa7e24dc750aa525625e896fa, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45681,1689743683028; forceNewPlan=false, retain=false 2023-07-19 05:14:55,922 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=f6f6fceaa7e24dc750aa525625e896fa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:55,923 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743695922"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743695922"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743695922"}]},"ts":"1689743695922"} 2023-07-19 05:14:55,925 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure f6f6fceaa7e24dc750aa525625e896fa, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:14:56,086 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 
2023-07-19 05:14:56,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f6f6fceaa7e24dc750aa525625e896fa, NAME => 'hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:14:56,087 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:56,087 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:56,087 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:56,087 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:56,089 INFO [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:56,091 DEBUG [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/info 2023-07-19 05:14:56,091 DEBUG [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/info 2023-07-19 05:14:56,095 INFO [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f6f6fceaa7e24dc750aa525625e896fa columnFamilyName info 2023-07-19 05:14:56,110 DEBUG [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] regionserver.HStore(539): loaded hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/info/b0b2cea6d8eb47f8900a07a5b8fd22cf 2023-07-19 05:14:56,110 INFO [StoreOpener-f6f6fceaa7e24dc750aa525625e896fa-1] regionserver.HStore(310): Store=f6f6fceaa7e24dc750aa525625e896fa/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:56,111 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:56,114 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:56,119 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:14:56,122 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f6f6fceaa7e24dc750aa525625e896fa; next sequenceid=13; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11501333280, jitterRate=0.07114513218402863}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:56,122 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f6f6fceaa7e24dc750aa525625e896fa: 2023-07-19 05:14:56,123 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa., pid=83, masterSystemTime=1689743696078 2023-07-19 05:14:56,126 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:14:56,126 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 
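The close/reopen of hbase:namespace recorded above (pid=81, moved off jenkins-hbase4.apache.org,43237 onto ,45681) is the side effect of the earlier request to move servers [43237, 41899, 41979] into rsgroup bar: regions those servers host for tables that stay in the default group are shipped to a server that remains there. A hedged sketch of the client side of that request follows, using only RSGroupAdminClient methods visible in the stack traces plus getRSGroupInfo; the wrapper class name is hypothetical and the ports are copied from the log.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveLiveServersSketch {
      // Move three live region servers from "default" into "bar". Regions they
      // host for tables that remain in "default" (hbase:namespace above) are
      // reassigned to a server still in the source group before the call returns.
      static void moveServersToBar(RSGroupAdminClient rsGroupAdmin) throws Exception {
        Set<Address> servers = new HashSet<>(Arrays.asList(
            Address.fromParts("jenkins-hbase4.apache.org", 43237),
            Address.fromParts("jenkins-hbase4.apache.org", 41899),
            Address.fromParts("jenkins-hbase4.apache.org", 41979)));
        rsGroupAdmin.moveServers(servers, "bar");
        RSGroupInfo bar = rsGroupAdmin.getRSGroupInfo("bar");
        System.out.println("bar servers: " + bar.getServers());
      }
    }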
2023-07-19 05:14:56,126 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=f6f6fceaa7e24dc750aa525625e896fa, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:56,127 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743696126"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743696126"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743696126"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743696126"}]},"ts":"1689743696126"} 2023-07-19 05:14:56,132 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-19 05:14:56,132 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure f6f6fceaa7e24dc750aa525625e896fa, server=jenkins-hbase4.apache.org,45681,1689743683028 in 204 msec 2023-07-19 05:14:56,135 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=f6f6fceaa7e24dc750aa525625e896fa, REOPEN/MOVE in 542 msec 2023-07-19 05:14:56,580 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-19 05:14:56,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-19 05:14:56,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41899,1689743683228, jenkins-hbase4.apache.org,41979,1689743683435, jenkins-hbase4.apache.org,43237,1689743687175] are moved back to default 2023-07-19 05:14:56,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-19 05:14:56,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:14:56,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:56,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:56,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-19 05:14:56,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:56,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:14:56,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-19 05:14:56,608 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:14:56,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 84 2023-07-19 05:14:56,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-19 05:14:56,612 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:56,612 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 05:14:56,613 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:56,614 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:56,619 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:14:56,626 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:56,627 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572 empty. 
2023-07-19 05:14:56,628 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:56,628 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-19 05:14:56,710 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-19 05:14:56,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-19 05:14:56,712 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 070bdaa29426a2645f0b005a91b8c572, NAME => 'Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:14:56,747 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:56,747 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 070bdaa29426a2645f0b005a91b8c572, disabling compactions & flushes 2023-07-19 05:14:56,747 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:56,747 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:56,747 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. after waiting 0 ms 2023-07-19 05:14:56,747 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:56,747 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 
2023-07-19 05:14:56,747 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 070bdaa29426a2645f0b005a91b8c572: 2023-07-19 05:14:56,750 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:14:56,751 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689743696751"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743696751"}]},"ts":"1689743696751"} 2023-07-19 05:14:56,753 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 05:14:56,755 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:14:56,755 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743696755"}]},"ts":"1689743696755"} 2023-07-19 05:14:56,756 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-19 05:14:56,763 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=070bdaa29426a2645f0b005a91b8c572, ASSIGN}] 2023-07-19 05:14:56,764 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=070bdaa29426a2645f0b005a91b8c572, ASSIGN 2023-07-19 05:14:56,765 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=070bdaa29426a2645f0b005a91b8c572, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45681,1689743683028; forceNewPlan=false, retain=false 2023-07-19 05:14:56,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-19 05:14:56,917 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=070bdaa29426a2645f0b005a91b8c572, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:56,917 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689743696916"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743696916"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743696916"}]},"ts":"1689743696916"} 2023-07-19 05:14:56,919 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; OpenRegionProcedure 070bdaa29426a2645f0b005a91b8c572, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 
05:14:57,075 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:57,075 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 070bdaa29426a2645f0b005a91b8c572, NAME => 'Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:14:57,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:57,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,079 INFO [StoreOpener-070bdaa29426a2645f0b005a91b8c572-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,081 DEBUG [StoreOpener-070bdaa29426a2645f0b005a91b8c572-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572/f 2023-07-19 05:14:57,081 DEBUG [StoreOpener-070bdaa29426a2645f0b005a91b8c572-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572/f 2023-07-19 05:14:57,081 INFO [StoreOpener-070bdaa29426a2645f0b005a91b8c572-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 070bdaa29426a2645f0b005a91b8c572 columnFamilyName f 2023-07-19 05:14:57,082 INFO [StoreOpener-070bdaa29426a2645f0b005a91b8c572-1] regionserver.HStore(310): Store=070bdaa29426a2645f0b005a91b8c572/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:57,083 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,083 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,092 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:14:57,093 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 070bdaa29426a2645f0b005a91b8c572; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9637200000, jitterRate=-0.10246580839157104}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:57,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 070bdaa29426a2645f0b005a91b8c572: 2023-07-19 05:14:57,094 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572., pid=86, masterSystemTime=1689743697071 2023-07-19 05:14:57,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:57,097 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 
2023-07-19 05:14:57,097 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=070bdaa29426a2645f0b005a91b8c572, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:57,098 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689743697097"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743697097"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743697097"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743697097"}]},"ts":"1689743697097"} 2023-07-19 05:14:57,101 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-19 05:14:57,101 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; OpenRegionProcedure 070bdaa29426a2645f0b005a91b8c572, server=jenkins-hbase4.apache.org,45681,1689743683028 in 180 msec 2023-07-19 05:14:57,104 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-19 05:14:57,104 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=070bdaa29426a2645f0b005a91b8c572, ASSIGN in 339 msec 2023-07-19 05:14:57,105 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:14:57,105 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743697105"}]},"ts":"1689743697105"} 2023-07-19 05:14:57,108 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-19 05:14:57,112 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:14:57,113 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 508 msec 2023-07-19 05:14:57,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-19 05:14:57,215 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-19 05:14:57,215 DEBUG [Listener at localhost/38799] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-19 05:14:57,215 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:14:57,221 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
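The CreateTableProcedure above (pid=84) amounts to a plain Admin.createTable with a single column family 'f' and default attributes, matching the descriptor echoed by HMaster at 05:14:56,604. A sketch under that assumption; the class name and connection handling are illustrative, and createTable is assumed to block until the procedure completes and the region is assigned, as the "procId: 84 completed" entry suggests.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateGroupTableSketch {
      // One column family "f", every attribute left at its default, matching the
      // descriptor printed by the master for pid=84.
      static void createTable(Connection conn) throws Exception {
        TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
        try (Admin admin = conn.getAdmin()) {
          admin.createTable(TableDescriptorBuilder.newBuilder(tn)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build());
        }
      }
    }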
2023-07-19 05:14:57,221 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:14:57,221 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-19 05:14:57,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-19 05:14:57,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:57,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 05:14:57,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:57,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:57,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-19 05:14:57,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(345): Moving region 070bdaa29426a2645f0b005a91b8c572 to RSGroup bar 2023-07-19 05:14:57,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:14:57,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:14:57,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:14:57,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:14:57,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-19 05:14:57,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:14:57,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=070bdaa29426a2645f0b005a91b8c572, REOPEN/MOVE 2023-07-19 05:14:57,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-19 05:14:57,232 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=070bdaa29426a2645f0b005a91b8c572, REOPEN/MOVE 2023-07-19 05:14:57,233 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=070bdaa29426a2645f0b005a91b8c572, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:57,233 DEBUG [PEWorker-1] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689743697233"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743697233"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743697233"}]},"ts":"1689743697233"} 2023-07-19 05:14:57,238 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 070bdaa29426a2645f0b005a91b8c572, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:14:57,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 070bdaa29426a2645f0b005a91b8c572, disabling compactions & flushes 2023-07-19 05:14:57,394 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:57,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:57,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. after waiting 0 ms 2023-07-19 05:14:57,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:57,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:14:57,399 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 
2023-07-19 05:14:57,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 070bdaa29426a2645f0b005a91b8c572: 2023-07-19 05:14:57,399 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 070bdaa29426a2645f0b005a91b8c572 move to jenkins-hbase4.apache.org,41979,1689743683435 record at close sequenceid=2 2023-07-19 05:14:57,401 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,402 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=070bdaa29426a2645f0b005a91b8c572, regionState=CLOSED 2023-07-19 05:14:57,402 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689743697402"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743697402"}]},"ts":"1689743697402"} 2023-07-19 05:14:57,406 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-19 05:14:57,406 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 070bdaa29426a2645f0b005a91b8c572, server=jenkins-hbase4.apache.org,45681,1689743683028 in 169 msec 2023-07-19 05:14:57,406 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=070bdaa29426a2645f0b005a91b8c572, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41979,1689743683435; forceNewPlan=false, retain=false 2023-07-19 05:14:57,557 INFO [jenkins-hbase4:35853] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 05:14:57,557 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=070bdaa29426a2645f0b005a91b8c572, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:57,557 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689743697557"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743697557"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743697557"}]},"ts":"1689743697557"} 2023-07-19 05:14:57,559 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 070bdaa29426a2645f0b005a91b8c572, server=jenkins-hbase4.apache.org,41979,1689743683435}] 2023-07-19 05:14:57,716 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 
2023-07-19 05:14:57,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 070bdaa29426a2645f0b005a91b8c572, NAME => 'Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:14:57,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:57,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,721 INFO [StoreOpener-070bdaa29426a2645f0b005a91b8c572-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,723 DEBUG [StoreOpener-070bdaa29426a2645f0b005a91b8c572-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572/f 2023-07-19 05:14:57,723 DEBUG [StoreOpener-070bdaa29426a2645f0b005a91b8c572-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572/f 2023-07-19 05:14:57,723 INFO [StoreOpener-070bdaa29426a2645f0b005a91b8c572-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 070bdaa29426a2645f0b005a91b8c572 columnFamilyName f 2023-07-19 05:14:57,724 INFO [StoreOpener-070bdaa29426a2645f0b005a91b8c572-1] regionserver.HStore(310): Store=070bdaa29426a2645f0b005a91b8c572/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:57,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,727 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:57,730 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 070bdaa29426a2645f0b005a91b8c572; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12063874880, jitterRate=0.12353590130805969}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:57,731 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 070bdaa29426a2645f0b005a91b8c572: 2023-07-19 05:14:57,731 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572., pid=89, masterSystemTime=1689743697710 2023-07-19 05:14:57,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:57,733 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:57,733 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=070bdaa29426a2645f0b005a91b8c572, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:57,734 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689743697733"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743697733"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743697733"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743697733"}]},"ts":"1689743697733"} 2023-07-19 05:14:57,737 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-19 05:14:57,737 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 070bdaa29426a2645f0b005a91b8c572, server=jenkins-hbase4.apache.org,41979,1689743683435 in 176 msec 2023-07-19 05:14:57,739 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=070bdaa29426a2645f0b005a91b8c572, REOPEN/MOVE in 507 msec 2023-07-19 05:14:58,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-19 05:14:58,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
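The pid=87/88/89 sequence above is one complete REOPEN/MOVE of region 070bdaa29426a2645f0b005a91b8c572: CloseRegionProcedure on the old server, a CLOSED/OPENING meta update, then OpenRegionProcedure on the target server, all triggered by moving the table into rsgroup bar. A minimal client-side sketch of the call that drives such a move, assuming the branch-2.4 RSGroupAdminClient API (the group and table names are taken from the log; the class name and snippet are illustrative, not the test's actual code):

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToBarGroup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Moving the table to rsgroup "bar" makes the master unassign each of its
      // regions from the current server and reopen it on a server of the target
      // group, i.e. the CloseRegionProcedure / OpenRegionProcedure pair logged above.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
    }
  }
}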
2023-07-19 05:14:58,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:14:58,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:58,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:58,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-19 05:14:58,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:58,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-19 05:14:58,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:58,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:46730 deadline: 1689744898240, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-19 05:14:58,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979] to rsgroup default 2023-07-19 05:14:58,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:58,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 289 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:46730 deadline: 1689744898241, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-19 05:14:58,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-19 05:14:58,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:58,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 05:14:58,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:58,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:58,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-19 05:14:58,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(345): Moving region 070bdaa29426a2645f0b005a91b8c572 to RSGroup default 2023-07-19 05:14:58,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=070bdaa29426a2645f0b005a91b8c572, REOPEN/MOVE 2023-07-19 05:14:58,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-19 05:14:58,251 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=070bdaa29426a2645f0b005a91b8c572, REOPEN/MOVE 2023-07-19 05:14:58,251 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=070bdaa29426a2645f0b005a91b8c572, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:14:58,252 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689743698251"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743698251"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743698251"}]},"ts":"1689743698251"} 2023-07-19 05:14:58,253 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE; CloseRegionProcedure 070bdaa29426a2645f0b005a91b8c572, server=jenkins-hbase4.apache.org,41979,1689743683435}] 2023-07-19 05:14:58,407 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:58,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 070bdaa29426a2645f0b005a91b8c572, disabling compactions & flushes 2023-07-19 05:14:58,408 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:58,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:58,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. after waiting 0 ms 2023-07-19 05:14:58,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:58,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 05:14:58,414 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 
2023-07-19 05:14:58,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 070bdaa29426a2645f0b005a91b8c572: 2023-07-19 05:14:58,414 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 070bdaa29426a2645f0b005a91b8c572 move to jenkins-hbase4.apache.org,45681,1689743683028 record at close sequenceid=5 2023-07-19 05:14:58,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:58,417 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=070bdaa29426a2645f0b005a91b8c572, regionState=CLOSED 2023-07-19 05:14:58,417 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689743698417"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743698417"}]},"ts":"1689743698417"} 2023-07-19 05:14:58,420 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-19 05:14:58,420 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; CloseRegionProcedure 070bdaa29426a2645f0b005a91b8c572, server=jenkins-hbase4.apache.org,41979,1689743683435 in 165 msec 2023-07-19 05:14:58,421 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=070bdaa29426a2645f0b005a91b8c572, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45681,1689743683028; forceNewPlan=false, retain=false 2023-07-19 05:14:58,571 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=070bdaa29426a2645f0b005a91b8c572, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:58,571 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689743698571"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743698571"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743698571"}]},"ts":"1689743698571"} 2023-07-19 05:14:58,573 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=90, state=RUNNABLE; OpenRegionProcedure 070bdaa29426a2645f0b005a91b8c572, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:14:58,729 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 
2023-07-19 05:14:58,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 070bdaa29426a2645f0b005a91b8c572, NAME => 'Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:14:58,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:58,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:58,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:58,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:58,731 INFO [StoreOpener-070bdaa29426a2645f0b005a91b8c572-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:58,732 DEBUG [StoreOpener-070bdaa29426a2645f0b005a91b8c572-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572/f 2023-07-19 05:14:58,732 DEBUG [StoreOpener-070bdaa29426a2645f0b005a91b8c572-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572/f 2023-07-19 05:14:58,733 INFO [StoreOpener-070bdaa29426a2645f0b005a91b8c572-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 070bdaa29426a2645f0b005a91b8c572 columnFamilyName f 2023-07-19 05:14:58,735 INFO [StoreOpener-070bdaa29426a2645f0b005a91b8c572-1] regionserver.HStore(310): Store=070bdaa29426a2645f0b005a91b8c572/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:14:58,736 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:58,738 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:58,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:58,743 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 070bdaa29426a2645f0b005a91b8c572; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10169420960, jitterRate=-0.052898868918418884}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:14:58,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 070bdaa29426a2645f0b005a91b8c572: 2023-07-19 05:14:58,744 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572., pid=92, masterSystemTime=1689743698724 2023-07-19 05:14:58,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:58,745 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:58,746 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=070bdaa29426a2645f0b005a91b8c572, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:58,746 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689743698746"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743698746"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743698746"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743698746"}]},"ts":"1689743698746"} 2023-07-19 05:14:58,749 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=90 2023-07-19 05:14:58,749 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=90, state=SUCCESS; OpenRegionProcedure 070bdaa29426a2645f0b005a91b8c572, server=jenkins-hbase4.apache.org,45681,1689743683028 in 174 msec 2023-07-19 05:14:58,750 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=070bdaa29426a2645f0b005a91b8c572, REOPEN/MOVE in 500 msec 2023-07-19 05:14:59,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure.ProcedureSyncWait(216): waitFor pid=90 2023-07-19 05:14:59,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-19 05:14:59,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:14:59,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:59,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:59,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-19 05:14:59,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:59,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 296 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:46730 deadline: 1689744899257, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
2023-07-19 05:14:59,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979] to rsgroup default 2023-07-19 05:14:59,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:59,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 05:14:59,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:59,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:59,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-19 05:14:59,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41899,1689743683228, jenkins-hbase4.apache.org,41979,1689743683435, jenkins-hbase4.apache.org,43237,1689743687175] are moved back to bar 2023-07-19 05:14:59,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-19 05:14:59,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:14:59,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:59,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:59,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-19 05:14:59,273 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43237] ipc.CallRunner(144): callId: 220 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:57856 deadline: 1689743759272, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45681 startCode=1689743683028. As of locationSeqNum=10. 
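The two ConstraintExceptions above record the invariant that testFailRemoveGroup exercises: removeRSGroup is rejected while the group still owns tables, and rejected again while it still owns servers; only after both have been moved back to default does the removal go through (the znode updates that follow). A minimal sketch of that teardown order, assuming the branch-2.4 RSGroupAdminClient API (host names and ports are copied from the log; the helper itself is illustrative, not the test's code):

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RemoveGroupInOrder {
  static void removeBarGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // 1. A group that still owns tables cannot be removed: move them out first.
    rsGroupAdmin.moveTables(
        Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");
    // 2. A group that still owns servers cannot be removed either: move them out too.
    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 43237));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41899));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41979));
    rsGroupAdmin.moveServers(servers, "default");
    // 3. Only now does removeRSGroup pass both ConstraintException checks.
    rsGroupAdmin.removeRSGroup("bar");
  }
}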
2023-07-19 05:14:59,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:59,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:59,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 05:14:59,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:14:59,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:59,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:59,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:59,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:59,396 INFO [Listener at localhost/38799] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-19 05:14:59,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-19 05:14:59,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-19 05:14:59,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-19 05:14:59,401 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743699401"}]},"ts":"1689743699401"} 2023-07-19 05:14:59,402 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-19 05:14:59,405 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-19 05:14:59,406 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=070bdaa29426a2645f0b005a91b8c572, UNASSIGN}] 2023-07-19 05:14:59,408 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=070bdaa29426a2645f0b005a91b8c572, UNASSIGN 2023-07-19 05:14:59,411 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=070bdaa29426a2645f0b005a91b8c572, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:14:59,411 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689743699411"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743699411"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743699411"}]},"ts":"1689743699411"} 2023-07-19 05:14:59,412 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE; CloseRegionProcedure 070bdaa29426a2645f0b005a91b8c572, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:14:59,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-19 05:14:59,565 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:59,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 070bdaa29426a2645f0b005a91b8c572, disabling compactions & flushes 2023-07-19 05:14:59,566 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:59,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:59,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. after waiting 0 ms 2023-07-19 05:14:59,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 2023-07-19 05:14:59,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-19 05:14:59,571 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572. 
2023-07-19 05:14:59,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 070bdaa29426a2645f0b005a91b8c572: 2023-07-19 05:14:59,573 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:59,573 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=070bdaa29426a2645f0b005a91b8c572, regionState=CLOSED 2023-07-19 05:14:59,573 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689743699573"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743699573"}]},"ts":"1689743699573"} 2023-07-19 05:14:59,578 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-19 05:14:59,578 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; CloseRegionProcedure 070bdaa29426a2645f0b005a91b8c572, server=jenkins-hbase4.apache.org,45681,1689743683028 in 164 msec 2023-07-19 05:14:59,580 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume processing ppid=93 2023-07-19 05:14:59,580 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=93, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=070bdaa29426a2645f0b005a91b8c572, UNASSIGN in 172 msec 2023-07-19 05:14:59,581 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743699581"}]},"ts":"1689743699581"} 2023-07-19 05:14:59,582 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-19 05:14:59,584 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-19 05:14:59,586 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 189 msec 2023-07-19 05:14:59,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-19 05:14:59,703 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-19 05:14:59,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-19 05:14:59,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 05:14:59,706 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 05:14:59,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-19 05:14:59,707 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=96, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 05:14:59,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:59,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:59,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:14:59,711 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:59,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-19 05:14:59,713 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572/recovered.edits] 2023-07-19 05:14:59,718 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572/recovered.edits/10.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572/recovered.edits/10.seqid 2023-07-19 05:14:59,719 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testFailRemoveGroup/070bdaa29426a2645f0b005a91b8c572 2023-07-19 05:14:59,719 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-19 05:14:59,721 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=96, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 05:14:59,723 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-19 05:14:59,726 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-19 05:14:59,727 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=96, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 05:14:59,727 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-19 05:14:59,727 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743699727"}]},"ts":"9223372036854775807"} 2023-07-19 05:14:59,729 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-19 05:14:59,729 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 070bdaa29426a2645f0b005a91b8c572, NAME => 'Group_testFailRemoveGroup,,1689743696604.070bdaa29426a2645f0b005a91b8c572.', STARTKEY => '', ENDKEY => ''}] 2023-07-19 05:14:59,729 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-19 05:14:59,729 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689743699729"}]},"ts":"9223372036854775807"} 2023-07-19 05:14:59,730 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-19 05:14:59,732 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=96, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 05:14:59,733 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=96, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 28 msec 2023-07-19 05:14:59,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-19 05:14:59,814 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 96 completed 2023-07-19 05:14:59,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:59,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:59,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:14:59,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
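The pid=93 and pid=96 procedures above are the standard two-step table drop: DisableTableProcedure unassigns the region and marks the table DISABLED in hbase:meta, then DeleteTableProcedure archives the region directory and removes the region and table-state rows from meta. A minimal sketch of the client calls that start those procedures, assuming the standard HBase 2.x Admin API (the table name comes from the log; the helper is illustrative):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class DropTestTable {
  static void dropTable(Connection conn) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Admin admin = conn.getAdmin()) {
      // DisableTableProcedure: regions are unassigned and the table goes DISABLED in hbase:meta.
      admin.disableTable(table);
      // DeleteTableProcedure: region dirs are archived and the meta rows are deleted.
      admin.deleteTable(table);
    }
  }
}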
2023-07-19 05:14:59,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:14:59,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:14:59,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:14:59,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:14:59,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:59,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:14:59,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:14:59,833 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:14:59,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:14:59,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:59,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:59,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:14:59,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:14:59,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:59,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:59,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:14:59,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:59,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 344 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744899844, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:14:59,845 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:14:59,846 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:14:59,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:59,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:59,847 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:14:59,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:14:59,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:59,865 INFO [Listener at localhost/38799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=511 (was 509) Potentially hanging thread: hconnection-0x5b070797-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5b070797-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18ddab65-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5b070797-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-573178852_17 at /127.0.0.1:57982 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_156935409_17 at /127.0.0.1:40664 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5b070797-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18ddab65-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1327810005_17 at /127.0.0.1:51446 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1327810005_17 at /127.0.0.1:51356 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18ddab65-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18ddab65-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6043b73e-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18ddab65-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18ddab65-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=815 (was 814) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=350 (was 355), ProcessCount=173 (was 173), AvailableMemoryMB=3186 (was 3323) 2023-07-19 05:14:59,865 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-19 05:14:59,881 INFO [Listener at localhost/38799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=511, OpenFileDescriptor=815, MaxFileDescriptor=60000, SystemLoadAverage=350, ProcessCount=173, AvailableMemoryMB=3186 2023-07-19 05:14:59,881 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-19 05:14:59,881 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-19 05:14:59,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:59,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:59,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:14:59,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 05:14:59,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:14:59,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:14:59,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:14:59,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:14:59,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:59,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:14:59,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:14:59,897 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:14:59,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:14:59,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:59,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:59,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:14:59,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:14:59,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:59,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:59,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:14:59,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:14:59,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 372 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744899910, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:14:59,911 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:14:59,915 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:14:59,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:59,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:59,916 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:14:59,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:14:59,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:59,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:14:59,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:59,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_272160960 2023-07-19 05:14:59,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_272160960 2023-07-19 05:14:59,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:59,924 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:59,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:59,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:14:59,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:59,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:59,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41899] to rsgroup Group_testMultiTableMove_272160960 2023-07-19 05:14:59,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:59,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_272160960 2023-07-19 05:14:59,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:59,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:59,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 05:14:59,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41899,1689743683228] are moved back to default 2023-07-19 05:14:59,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_272160960 2023-07-19 05:14:59,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:14:59,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:14:59,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:14:59,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_272160960 2023-07-19 05:14:59,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:14:59,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:14:59,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 05:14:59,952 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:14:59,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 97 2023-07-19 05:14:59,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-19 05:14:59,954 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:14:59,955 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_272160960 2023-07-19 05:14:59,955 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:14:59,955 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:14:59,960 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:14:59,962 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:14:59,963 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72 empty. 
2023-07-19 05:14:59,963 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:14:59,963 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-19 05:14:59,982 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-19 05:14:59,983 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 23ada99ff478ca93f4b6bb0ba1556d72, NAME => 'GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:14:59,995 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:14:59,996 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 23ada99ff478ca93f4b6bb0ba1556d72, disabling compactions & flushes 2023-07-19 05:14:59,996 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 2023-07-19 05:14:59,996 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 2023-07-19 05:14:59,996 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. after waiting 0 ms 2023-07-19 05:14:59,996 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 2023-07-19 05:14:59,996 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 
2023-07-19 05:14:59,996 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 23ada99ff478ca93f4b6bb0ba1556d72: 2023-07-19 05:14:59,998 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:14:59,999 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743699999"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743699999"}]},"ts":"1689743699999"} 2023-07-19 05:15:00,000 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 05:15:00,001 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:15:00,001 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743700001"}]},"ts":"1689743700001"} 2023-07-19 05:15:00,002 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-19 05:15:00,006 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:15:00,006 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:15:00,006 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:15:00,006 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:15:00,006 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:15:00,006 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=23ada99ff478ca93f4b6bb0ba1556d72, ASSIGN}] 2023-07-19 05:15:00,008 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=23ada99ff478ca93f4b6bb0ba1556d72, ASSIGN 2023-07-19 05:15:00,009 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=23ada99ff478ca93f4b6bb0ba1556d72, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45681,1689743683028; forceNewPlan=false, retain=false 2023-07-19 05:15:00,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-19 05:15:00,159 INFO [jenkins-hbase4:35853] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-19 05:15:00,161 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=23ada99ff478ca93f4b6bb0ba1556d72, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:00,161 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743700160"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743700160"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743700160"}]},"ts":"1689743700160"} 2023-07-19 05:15:00,162 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 23ada99ff478ca93f4b6bb0ba1556d72, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:15:00,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-19 05:15:00,318 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 2023-07-19 05:15:00,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 23ada99ff478ca93f4b6bb0ba1556d72, NAME => 'GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:00,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:00,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:00,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:00,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:00,321 INFO [StoreOpener-23ada99ff478ca93f4b6bb0ba1556d72-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:00,323 DEBUG [StoreOpener-23ada99ff478ca93f4b6bb0ba1556d72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72/f 2023-07-19 05:15:00,323 DEBUG [StoreOpener-23ada99ff478ca93f4b6bb0ba1556d72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72/f 2023-07-19 05:15:00,323 INFO [StoreOpener-23ada99ff478ca93f4b6bb0ba1556d72-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 23ada99ff478ca93f4b6bb0ba1556d72 columnFamilyName f 2023-07-19 05:15:00,324 INFO [StoreOpener-23ada99ff478ca93f4b6bb0ba1556d72-1] regionserver.HStore(310): Store=23ada99ff478ca93f4b6bb0ba1556d72/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:00,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:00,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:00,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:00,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:00,333 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 23ada99ff478ca93f4b6bb0ba1556d72; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9750704160, jitterRate=-0.09189490973949432}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:00,333 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 23ada99ff478ca93f4b6bb0ba1556d72: 2023-07-19 05:15:00,334 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72., pid=99, masterSystemTime=1689743700314 2023-07-19 05:15:00,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 2023-07-19 05:15:00,336 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 
2023-07-19 05:15:00,336 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=23ada99ff478ca93f4b6bb0ba1556d72, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:00,337 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743700336"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743700336"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743700336"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743700336"}]},"ts":"1689743700336"} 2023-07-19 05:15:00,340 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-19 05:15:00,340 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 23ada99ff478ca93f4b6bb0ba1556d72, server=jenkins-hbase4.apache.org,45681,1689743683028 in 176 msec 2023-07-19 05:15:00,342 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-19 05:15:00,342 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=23ada99ff478ca93f4b6bb0ba1556d72, ASSIGN in 334 msec 2023-07-19 05:15:00,342 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:15:00,342 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743700342"}]},"ts":"1689743700342"} 2023-07-19 05:15:00,344 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-19 05:15:00,347 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:15:00,349 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 398 msec 2023-07-19 05:15:00,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-19 05:15:00,557 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 97 completed 2023-07-19 05:15:00,557 DEBUG [Listener at localhost/38799] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-19 05:15:00,557 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:00,562 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-19 05:15:00,562 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:00,563 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-19 05:15:00,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:00,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 05:15:00,568 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:15:00,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 100 2023-07-19 05:15:00,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-19 05:15:00,571 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:00,571 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_272160960 2023-07-19 05:15:00,572 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:00,572 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:00,575 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:15:00,577 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:00,577 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4 empty. 
2023-07-19 05:15:00,578 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:00,578 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-19 05:15:00,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-19 05:15:00,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-19 05:15:01,198 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-19 05:15:01,200 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9369c69453d5a5f496a014a4557b0be4, NAME => 'GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:15:01,216 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:01,216 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 9369c69453d5a5f496a014a4557b0be4, disabling compactions & flushes 2023-07-19 05:15:01,216 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 2023-07-19 05:15:01,216 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 2023-07-19 05:15:01,216 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. after waiting 0 ms 2023-07-19 05:15:01,216 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 2023-07-19 05:15:01,216 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 
2023-07-19 05:15:01,216 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 9369c69453d5a5f496a014a4557b0be4: 2023-07-19 05:15:01,219 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:15:01,220 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743701220"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743701220"}]},"ts":"1689743701220"} 2023-07-19 05:15:01,221 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 05:15:01,222 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:15:01,222 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743701222"}]},"ts":"1689743701222"} 2023-07-19 05:15:01,223 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-19 05:15:01,226 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:15:01,227 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:15:01,227 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:15:01,227 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:15:01,227 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:15:01,227 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9369c69453d5a5f496a014a4557b0be4, ASSIGN}] 2023-07-19 05:15:01,229 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9369c69453d5a5f496a014a4557b0be4, ASSIGN 2023-07-19 05:15:01,230 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9369c69453d5a5f496a014a4557b0be4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43237,1689743687175; forceNewPlan=false, retain=false 2023-07-19 05:15:01,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-19 05:15:01,380 INFO [jenkins-hbase4:35853] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-19 05:15:01,381 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=9369c69453d5a5f496a014a4557b0be4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:01,382 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743701381"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743701381"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743701381"}]},"ts":"1689743701381"} 2023-07-19 05:15:01,384 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; OpenRegionProcedure 9369c69453d5a5f496a014a4557b0be4, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:15:01,541 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 2023-07-19 05:15:01,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9369c69453d5a5f496a014a4557b0be4, NAME => 'GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:01,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:01,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:01,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:01,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:01,543 INFO [StoreOpener-9369c69453d5a5f496a014a4557b0be4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:01,545 DEBUG [StoreOpener-9369c69453d5a5f496a014a4557b0be4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4/f 2023-07-19 05:15:01,545 DEBUG [StoreOpener-9369c69453d5a5f496a014a4557b0be4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4/f 2023-07-19 05:15:01,546 INFO [StoreOpener-9369c69453d5a5f496a014a4557b0be4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9369c69453d5a5f496a014a4557b0be4 columnFamilyName f 2023-07-19 05:15:01,546 INFO [StoreOpener-9369c69453d5a5f496a014a4557b0be4-1] regionserver.HStore(310): Store=9369c69453d5a5f496a014a4557b0be4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:01,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:01,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:01,550 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:01,554 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:01,555 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9369c69453d5a5f496a014a4557b0be4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10798985280, jitterRate=0.005733877420425415}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:01,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9369c69453d5a5f496a014a4557b0be4: 2023-07-19 05:15:01,556 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4., pid=102, masterSystemTime=1689743701536 2023-07-19 05:15:01,557 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 2023-07-19 05:15:01,558 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 
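With the region opened and the CreateTableProcedure finished, the test blocks until every region of the new table is reported as assigned, which produces the Waiter and "Waiting until all regions of table ... get assigned. Timeout = 60000ms" entries that follow. A sketch of that test-side wait, where TEST_UTIL stands in for the suite's shared HBaseTestingUtility instance:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForAssignmentExample {
  public static void waitForTable(HBaseTestingUtility TEST_UTIL) throws Exception {
    // Blocks until every region of the table is open on some region server,
    // or the 60s timeout elapses, matching the log's "Timeout = 60000ms".
    TEST_UTIL.waitUntilAllRegionsAssigned(
        TableName.valueOf("GrouptestMultiTableMoveB"), 60_000);
  }
}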
2023-07-19 05:15:01,558 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=9369c69453d5a5f496a014a4557b0be4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:01,558 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743701558"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743701558"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743701558"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743701558"}]},"ts":"1689743701558"} 2023-07-19 05:15:01,561 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-19 05:15:01,561 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; OpenRegionProcedure 9369c69453d5a5f496a014a4557b0be4, server=jenkins-hbase4.apache.org,43237,1689743687175 in 175 msec 2023-07-19 05:15:01,563 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-19 05:15:01,563 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9369c69453d5a5f496a014a4557b0be4, ASSIGN in 334 msec 2023-07-19 05:15:01,564 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:15:01,564 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743701564"}]},"ts":"1689743701564"} 2023-07-19 05:15:01,565 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-19 05:15:01,573 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:15:01,575 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 1.0080 sec 2023-07-19 05:15:01,589 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-19 05:15:01,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-19 05:15:01,776 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 100 completed 2023-07-19 05:15:01,776 DEBUG [Listener at localhost/38799] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-19 05:15:01,776 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:01,786 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. 
Checking AM states. 2023-07-19 05:15:01,786 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:01,787 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-19 05:15:01,787 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:01,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-19 05:15:01,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:15:01,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-19 05:15:01,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:15:01,812 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_272160960 2023-07-19 05:15:01,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_272160960 2023-07-19 05:15:01,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:01,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_272160960 2023-07-19 05:15:01,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:01,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:01,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_272160960 2023-07-19 05:15:01,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(345): Moving region 9369c69453d5a5f496a014a4557b0be4 to RSGroup Group_testMultiTableMove_272160960 2023-07-19 05:15:01,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9369c69453d5a5f496a014a4557b0be4, REOPEN/MOVE 2023-07-19 05:15:01,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_272160960 2023-07-19 05:15:01,827 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(345): Moving region 23ada99ff478ca93f4b6bb0ba1556d72 to RSGroup Group_testMultiTableMove_272160960 2023-07-19 05:15:01,825 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9369c69453d5a5f496a014a4557b0be4, REOPEN/MOVE 2023-07-19 05:15:01,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=23ada99ff478ca93f4b6bb0ba1556d72, REOPEN/MOVE 2023-07-19 05:15:01,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_272160960, current retry=0 2023-07-19 05:15:01,832 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=23ada99ff478ca93f4b6bb0ba1556d72, REOPEN/MOVE 2023-07-19 05:15:01,832 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=9369c69453d5a5f496a014a4557b0be4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:01,832 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=23ada99ff478ca93f4b6bb0ba1556d72, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:01,832 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743701832"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743701832"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743701832"}]},"ts":"1689743701832"} 2023-07-19 05:15:01,832 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743701832"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743701832"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743701832"}]},"ts":"1689743701832"} 2023-07-19 05:15:01,834 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=103, state=RUNNABLE; CloseRegionProcedure 9369c69453d5a5f496a014a4557b0be4, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:15:01,840 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=104, state=RUNNABLE; CloseRegionProcedure 23ada99ff478ca93f4b6bb0ba1556d72, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:15:01,987 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:01,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9369c69453d5a5f496a014a4557b0be4, disabling compactions & flushes 2023-07-19 05:15:01,988 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 2023-07-19 05:15:01,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 2023-07-19 05:15:01,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. after waiting 0 ms 2023-07-19 05:15:01,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 2023-07-19 05:15:01,992 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:01,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:15:01,993 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 23ada99ff478ca93f4b6bb0ba1556d72, disabling compactions & flushes 2023-07-19 05:15:01,993 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 2023-07-19 05:15:01,993 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 2023-07-19 05:15:01,993 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. after waiting 0 ms 2023-07-19 05:15:01,993 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 2023-07-19 05:15:01,995 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 
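The region closes logged here are the direct effect of the MoveTables request recorded a few entries back ("move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_272160960"): every region of a moved table is closed and reopened on a server belonging to the target group. On branch-2.4 the rsgroup coprocessor endpoint is normally driven through RSGroupAdminClient; the sketch below is illustrative of that call path, not the exact code this test runs.

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesExample {
  // 'conn' is an open Connection to a cluster that loads RSGroupAdminEndpoint,
  // as the mini-cluster in this log does.
  public static void moveTables(Connection conn, String targetGroup) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    Set<TableName> tables = new HashSet<>();
    tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
    tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
    // Triggers the REOPEN/MOVE TransitRegionStateProcedures seen in the log.
    rsGroupAdmin.moveTables(tables, targetGroup);
  }
}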
2023-07-19 05:15:01,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9369c69453d5a5f496a014a4557b0be4: 2023-07-19 05:15:01,995 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9369c69453d5a5f496a014a4557b0be4 move to jenkins-hbase4.apache.org,41899,1689743683228 record at close sequenceid=2 2023-07-19 05:15:01,996 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:01,997 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=9369c69453d5a5f496a014a4557b0be4, regionState=CLOSED 2023-07-19 05:15:01,997 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743701997"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743701997"}]},"ts":"1689743701997"} 2023-07-19 05:15:01,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:15:02,000 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 2023-07-19 05:15:02,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 23ada99ff478ca93f4b6bb0ba1556d72: 2023-07-19 05:15:02,000 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 23ada99ff478ca93f4b6bb0ba1556d72 move to jenkins-hbase4.apache.org,41899,1689743683228 record at close sequenceid=2 2023-07-19 05:15:02,001 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=103 2023-07-19 05:15:02,001 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=103, state=SUCCESS; CloseRegionProcedure 9369c69453d5a5f496a014a4557b0be4, server=jenkins-hbase4.apache.org,43237,1689743687175 in 164 msec 2023-07-19 05:15:02,002 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:02,002 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=23ada99ff478ca93f4b6bb0ba1556d72, regionState=CLOSED 2023-07-19 05:15:02,002 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743702001"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743702001"}]},"ts":"1689743702001"} 2023-07-19 05:15:02,002 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9369c69453d5a5f496a014a4557b0be4, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41899,1689743683228; forceNewPlan=false, retain=false 2023-07-19 05:15:02,006 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure 
pid=106, resume processing ppid=104 2023-07-19 05:15:02,006 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=104, state=SUCCESS; CloseRegionProcedure 23ada99ff478ca93f4b6bb0ba1556d72, server=jenkins-hbase4.apache.org,45681,1689743683028 in 164 msec 2023-07-19 05:15:02,007 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=23ada99ff478ca93f4b6bb0ba1556d72, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41899,1689743683228; forceNewPlan=false, retain=false 2023-07-19 05:15:02,153 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=9369c69453d5a5f496a014a4557b0be4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:15:02,153 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=23ada99ff478ca93f4b6bb0ba1556d72, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:15:02,153 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743702153"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743702153"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743702153"}]},"ts":"1689743702153"} 2023-07-19 05:15:02,153 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743702153"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743702153"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743702153"}]},"ts":"1689743702153"} 2023-07-19 05:15:02,155 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=103, state=RUNNABLE; OpenRegionProcedure 9369c69453d5a5f496a014a4557b0be4, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:15:02,155 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=104, state=RUNNABLE; OpenRegionProcedure 23ada99ff478ca93f4b6bb0ba1556d72, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:15:02,310 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 
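After the move, region 9369c69453d5a5f496a014a4557b0be4 is reopened on jenkins-hbase4.apache.org,41899, the server chosen from the target group. A client can confirm the post-move location with a fresh meta lookup through the RegionLocator; the sketch below is a hedged illustration of that check.

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckRegionLocationExample {
  public static void printLocation(Connection conn) throws Exception {
    TableName table = TableName.valueOf("GrouptestMultiTableMoveB");
    try (RegionLocator locator = conn.getRegionLocator(table)) {
      // reload=true forces a meta lookup so a location cached before
      // the move is not returned.
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
      System.out.println(loc.getServerName()); // expected: a server in the target group
    }
  }
}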
2023-07-19 05:15:02,310 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9369c69453d5a5f496a014a4557b0be4, NAME => 'GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:02,310 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:02,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:02,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:02,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:02,312 INFO [StoreOpener-9369c69453d5a5f496a014a4557b0be4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:02,313 DEBUG [StoreOpener-9369c69453d5a5f496a014a4557b0be4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4/f 2023-07-19 05:15:02,313 DEBUG [StoreOpener-9369c69453d5a5f496a014a4557b0be4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4/f 2023-07-19 05:15:02,314 INFO [StoreOpener-9369c69453d5a5f496a014a4557b0be4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9369c69453d5a5f496a014a4557b0be4 columnFamilyName f 2023-07-19 05:15:02,314 INFO [StoreOpener-9369c69453d5a5f496a014a4557b0be4-1] regionserver.HStore(310): Store=9369c69453d5a5f496a014a4557b0be4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:02,315 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:02,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:02,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:02,336 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9369c69453d5a5f496a014a4557b0be4; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11667472480, jitterRate=0.08661805093288422}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:02,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9369c69453d5a5f496a014a4557b0be4: 2023-07-19 05:15:02,337 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4., pid=107, masterSystemTime=1689743702306 2023-07-19 05:15:02,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 2023-07-19 05:15:02,339 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 2023-07-19 05:15:02,339 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 
2023-07-19 05:15:02,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 23ada99ff478ca93f4b6bb0ba1556d72, NAME => 'GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:02,340 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=9369c69453d5a5f496a014a4557b0be4, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:15:02,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:02,340 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743702340"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743702340"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743702340"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743702340"}]},"ts":"1689743702340"} 2023-07-19 05:15:02,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:02,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:02,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:02,342 INFO [StoreOpener-23ada99ff478ca93f4b6bb0ba1556d72-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:02,345 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=103 2023-07-19 05:15:02,345 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=103, state=SUCCESS; OpenRegionProcedure 9369c69453d5a5f496a014a4557b0be4, server=jenkins-hbase4.apache.org,41899,1689743683228 in 188 msec 2023-07-19 05:15:02,347 DEBUG [StoreOpener-23ada99ff478ca93f4b6bb0ba1556d72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72/f 2023-07-19 05:15:02,347 DEBUG [StoreOpener-23ada99ff478ca93f4b6bb0ba1556d72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72/f 2023-07-19 05:15:02,347 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9369c69453d5a5f496a014a4557b0be4, REOPEN/MOVE in 523 msec 2023-07-19 05:15:02,347 INFO [StoreOpener-23ada99ff478ca93f4b6bb0ba1556d72-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 23ada99ff478ca93f4b6bb0ba1556d72 columnFamilyName f 2023-07-19 05:15:02,348 INFO [StoreOpener-23ada99ff478ca93f4b6bb0ba1556d72-1] regionserver.HStore(310): Store=23ada99ff478ca93f4b6bb0ba1556d72/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:02,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:02,350 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:02,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:02,353 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 23ada99ff478ca93f4b6bb0ba1556d72; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11268424640, jitterRate=0.04945382475852966}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:02,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 23ada99ff478ca93f4b6bb0ba1556d72: 2023-07-19 05:15:02,354 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72., pid=108, masterSystemTime=1689743702306 2023-07-19 05:15:02,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 2023-07-19 05:15:02,355 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 
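Once both regions are open on the new server, the GetRSGroupInfoOfTable and GetRSGroupInfo requests logged next are the test asserting that both tables now belong to Group_testMultiTableMove_272160960. A hedged sketch of those lookups with the branch-2.4 rsgroup client:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class VerifyGroupMembershipExample {
  public static void verify(Connection conn, String group) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

    // Same RPCs the master logs as GetRSGroupInfoOfTable / GetRSGroupInfo.
    RSGroupInfo ofA = rsGroupAdmin.getRSGroupInfoOfTable(
        TableName.valueOf("GrouptestMultiTableMoveA"));
    RSGroupInfo ofB = rsGroupAdmin.getRSGroupInfoOfTable(
        TableName.valueOf("GrouptestMultiTableMoveB"));
    RSGroupInfo target = rsGroupAdmin.getRSGroupInfo(group);

    if (!group.equals(ofA.getName()) || !group.equals(ofB.getName())
        || !target.getTables().contains(TableName.valueOf("GrouptestMultiTableMoveA"))) {
      throw new IllegalStateException("tables did not land in group " + group);
    }
  }
}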
2023-07-19 05:15:02,355 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=23ada99ff478ca93f4b6bb0ba1556d72, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:15:02,356 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743702355"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743702355"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743702355"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743702355"}]},"ts":"1689743702355"} 2023-07-19 05:15:02,359 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=104 2023-07-19 05:15:02,359 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=104, state=SUCCESS; OpenRegionProcedure 23ada99ff478ca93f4b6bb0ba1556d72, server=jenkins-hbase4.apache.org,41899,1689743683228 in 202 msec 2023-07-19 05:15:02,361 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=23ada99ff478ca93f4b6bb0ba1556d72, REOPEN/MOVE in 532 msec 2023-07-19 05:15:02,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure.ProcedureSyncWait(216): waitFor pid=103 2023-07-19 05:15:02,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_272160960. 2023-07-19 05:15:02,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:02,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:02,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:02,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-19 05:15:02,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:15:02,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-19 05:15:02,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:15:02,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:02,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:02,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_272160960 2023-07-19 05:15:02,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:02,843 INFO [Listener at localhost/38799] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-19 05:15:02,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-19 05:15:02,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 05:15:02,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-19 05:15:02,847 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743702847"}]},"ts":"1689743702847"} 2023-07-19 05:15:02,849 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-19 05:15:02,851 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-19 05:15:02,851 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=23ada99ff478ca93f4b6bb0ba1556d72, UNASSIGN}] 2023-07-19 05:15:02,853 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=23ada99ff478ca93f4b6bb0ba1556d72, UNASSIGN 2023-07-19 05:15:02,854 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=23ada99ff478ca93f4b6bb0ba1556d72, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:15:02,854 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743702854"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743702854"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743702854"}]},"ts":"1689743702854"} 2023-07-19 05:15:02,860 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE; CloseRegionProcedure 23ada99ff478ca93f4b6bb0ba1556d72, 
server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:15:02,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-19 05:15:03,012 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:03,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 23ada99ff478ca93f4b6bb0ba1556d72, disabling compactions & flushes 2023-07-19 05:15:03,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 2023-07-19 05:15:03,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 2023-07-19 05:15:03,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. after waiting 0 ms 2023-07-19 05:15:03,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 2023-07-19 05:15:03,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 05:15:03,017 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72. 
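The close above is the UNASSIGN issued by DisableTableProcedure pid=109; a table must be fully disabled before the DeleteTableProcedure that follows (pid=112) can remove it. The teardown pair looks roughly like this on the client side:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class DropTableExample {
  public static void drop(Admin admin, String name) throws Exception {
    TableName table = TableName.valueOf(name);
    if (!admin.isTableDisabled(table)) {
      admin.disableTable(table);   // DisableTableProcedure (e.g. pid=109)
    }
    admin.deleteTable(table);      // DeleteTableProcedure  (e.g. pid=112)
  }
}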
2023-07-19 05:15:03,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 23ada99ff478ca93f4b6bb0ba1556d72: 2023-07-19 05:15:03,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:03,019 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=23ada99ff478ca93f4b6bb0ba1556d72, regionState=CLOSED 2023-07-19 05:15:03,019 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743703019"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743703019"}]},"ts":"1689743703019"} 2023-07-19 05:15:03,022 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-19 05:15:03,022 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; CloseRegionProcedure 23ada99ff478ca93f4b6bb0ba1556d72, server=jenkins-hbase4.apache.org,41899,1689743683228 in 161 msec 2023-07-19 05:15:03,023 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-19 05:15:03,024 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=23ada99ff478ca93f4b6bb0ba1556d72, UNASSIGN in 171 msec 2023-07-19 05:15:03,024 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743703024"}]},"ts":"1689743703024"} 2023-07-19 05:15:03,025 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-19 05:15:03,027 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-19 05:15:03,028 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 184 msec 2023-07-19 05:15:03,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-19 05:15:03,149 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-19 05:15:03,150 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-19 05:15:03,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 05:15:03,153 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 05:15:03,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_272160960' 2023-07-19 05:15:03,154 DEBUG [PEWorker-4] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=112, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 05:15:03,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_272160960 2023-07-19 05:15:03,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:03,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:03,159 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:03,161 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72/recovered.edits] 2023-07-19 05:15:03,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-19 05:15:03,166 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72/recovered.edits/7.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72/recovered.edits/7.seqid 2023-07-19 05:15:03,166 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveA/23ada99ff478ca93f4b6bb0ba1556d72 2023-07-19 05:15:03,166 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-19 05:15:03,169 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=112, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 05:15:03,171 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-19 05:15:03,173 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-19 05:15:03,174 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=112, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 05:15:03,174 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
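Note that the DELETE_TABLE_CLEAR_FS_LAYOUT step does not discard the region's files outright: the HFileArchiver entries above show the recovered.edits/7.seqid file being moved under the cluster's archive directory before the region directory is deleted. A hedged sketch of inspecting that archive location with the Hadoop FileSystem API, assuming fs.defaultFS points at the same HDFS instance as this run (hdfs://localhost:34189):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListArchivedRegionFilesExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(); // assumes fs.defaultFS is set appropriately
    FileSystem fs = FileSystem.get(conf);
    Path archived = new Path(
        "/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/"
        + "archive/data/default/GrouptestMultiTableMoveA");
    for (FileStatus status : fs.listStatus(archived)) {
      System.out.println(status.getPath()); // per-region archive directories
    }
  }
}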
2023-07-19 05:15:03,174 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743703174"}]},"ts":"9223372036854775807"} 2023-07-19 05:15:03,176 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-19 05:15:03,176 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 23ada99ff478ca93f4b6bb0ba1556d72, NAME => 'GrouptestMultiTableMoveA,,1689743699949.23ada99ff478ca93f4b6bb0ba1556d72.', STARTKEY => '', ENDKEY => ''}] 2023-07-19 05:15:03,176 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-19 05:15:03,176 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689743703176"}]},"ts":"9223372036854775807"} 2023-07-19 05:15:03,178 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-19 05:15:03,180 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=112, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 05:15:03,181 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 30 msec 2023-07-19 05:15:03,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-19 05:15:03,263 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 112 completed 2023-07-19 05:15:03,264 INFO [Listener at localhost/38799] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-19 05:15:03,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-19 05:15:03,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 05:15:03,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-19 05:15:03,273 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743703273"}]},"ts":"1689743703273"} 2023-07-19 05:15:03,275 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-19 05:15:03,277 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-19 05:15:03,279 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9369c69453d5a5f496a014a4557b0be4, UNASSIGN}] 2023-07-19 05:15:03,281 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9369c69453d5a5f496a014a4557b0be4, UNASSIGN 2023-07-19 05:15:03,282 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=9369c69453d5a5f496a014a4557b0be4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:15:03,282 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743703282"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743703282"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743703282"}]},"ts":"1689743703282"} 2023-07-19 05:15:03,284 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure 9369c69453d5a5f496a014a4557b0be4, server=jenkins-hbase4.apache.org,41899,1689743683228}] 2023-07-19 05:15:03,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-19 05:15:03,436 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:03,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9369c69453d5a5f496a014a4557b0be4, disabling compactions & flushes 2023-07-19 05:15:03,437 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 2023-07-19 05:15:03,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 2023-07-19 05:15:03,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. after waiting 0 ms 2023-07-19 05:15:03,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 2023-07-19 05:15:03,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 05:15:03,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4. 
2023-07-19 05:15:03,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9369c69453d5a5f496a014a4557b0be4: 2023-07-19 05:15:03,444 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:03,445 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=9369c69453d5a5f496a014a4557b0be4, regionState=CLOSED 2023-07-19 05:15:03,445 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689743703445"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743703445"}]},"ts":"1689743703445"} 2023-07-19 05:15:03,452 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-19 05:15:03,452 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure 9369c69453d5a5f496a014a4557b0be4, server=jenkins-hbase4.apache.org,41899,1689743683228 in 162 msec 2023-07-19 05:15:03,453 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-19 05:15:03,453 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9369c69453d5a5f496a014a4557b0be4, UNASSIGN in 173 msec 2023-07-19 05:15:03,454 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743703454"}]},"ts":"1689743703454"} 2023-07-19 05:15:03,455 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-19 05:15:03,457 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-19 05:15:03,460 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 193 msec 2023-07-19 05:15:03,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-19 05:15:03,575 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-19 05:15:03,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-19 05:15:03,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 05:15:03,579 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 05:15:03,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_272160960' 2023-07-19 05:15:03,580 DEBUG [PEWorker-2] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=116, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 05:15:03,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_272160960 2023-07-19 05:15:03,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:03,585 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:03,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:03,587 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4/recovered.edits] 2023-07-19 05:15:03,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-19 05:15:03,594 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4/recovered.edits/7.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4/recovered.edits/7.seqid 2023-07-19 05:15:03,595 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/GrouptestMultiTableMoveB/9369c69453d5a5f496a014a4557b0be4 2023-07-19 05:15:03,595 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-19 05:15:03,598 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=116, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 05:15:03,601 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-19 05:15:03,603 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-19 05:15:03,604 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=116, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 05:15:03,604 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
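[Editor's note] Each rsgroup mutation above is followed by RSGroupInfoManagerImpl rewriting one znode per group under /hbase/rsgroup and logging the resulting GroupInfo count. As a hypothetical illustration only, the current group list can be inspected with the plain ZooKeeper client; the connect string below is an assumption for the mini-cluster, not a value taken from this log.

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class ListRsGroupZnodes {
  public static void main(String[] args) throws Exception {
    // No-op watcher; the 30s session timeout is arbitrary.
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> { });
    try {
      // One child znode per rsgroup, e.g. default, master, Group_testMultiTableMove_272160960.
      List<String> groups = zk.getChildren("/hbase/rsgroup", false);
      groups.forEach(System.out::println);
    } finally {
      zk.close();
    }
  }
}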
2023-07-19 05:15:03,605 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743703605"}]},"ts":"9223372036854775807"} 2023-07-19 05:15:03,606 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-19 05:15:03,606 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 9369c69453d5a5f496a014a4557b0be4, NAME => 'GrouptestMultiTableMoveB,,1689743700564.9369c69453d5a5f496a014a4557b0be4.', STARTKEY => '', ENDKEY => ''}] 2023-07-19 05:15:03,606 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-19 05:15:03,606 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689743703606"}]},"ts":"9223372036854775807"} 2023-07-19 05:15:03,608 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-19 05:15:03,610 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=116, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 05:15:03,611 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 34 msec 2023-07-19 05:15:03,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-19 05:15:03,692 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 116 completed 2023-07-19 05:15:03,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:03,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:03,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:03,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
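[Editor's note] At this point both test tables have been disabled and deleted (procedures 109/112 for GrouptestMultiTableMoveA, 113/116 for GrouptestMultiTableMoveB) and the per-test rsgroup cleanup is starting. A minimal sketch of the client side of that disable/delete cycle, using the public HBase Admin API rather than the test's actual helper code, could look like this (connection setup and class name are illustrative):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropMoveTables {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      for (String name : new String[] { "GrouptestMultiTableMoveA", "GrouptestMultiTableMoveB" }) {
        TableName table = TableName.valueOf(name);
        // disableTable blocks until the DisableTableProcedure finishes, which is what the
        // "Operation: DISABLE ... completed" lines above reflect.
        admin.disableTable(table);
        // deleteTable likewise blocks on the DeleteTableProcedure ("Operation: DELETE ... completed").
        admin.deleteTable(table);
      }
    }
  }
}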
2023-07-19 05:15:03,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:03,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41899] to rsgroup default 2023-07-19 05:15:03,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_272160960 2023-07-19 05:15:03,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:03,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:03,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_272160960, current retry=0 2023-07-19 05:15:03,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41899,1689743683228] are moved back to Group_testMultiTableMove_272160960 2023-07-19 05:15:03,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_272160960 => default 2023-07-19 05:15:03,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:03,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_272160960 2023-07-19 05:15:03,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:03,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 05:15:03,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:03,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:03,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
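[Editor's note] The entries above are TestRSGroupsBase.tearDownAfterMethod returning server jenkins-hbase4.apache.org:41899 to the default group and then removing the now-empty Group_testMultiTableMove_272160960. Sketched against the RSGroupAdminClient calls that appear in the stack traces below, the equivalent client-side steps would be roughly the following; the exact method signatures are an assumption inferred from the log, not verified against branch-2.4 source.

import java.util.Collections;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RsGroupTeardownSketch {
  static void cleanup(RSGroupAdminClient rsGroupAdmin) throws Exception {
    // Move the server the test had placed in its group back to the default group,
    // matching "Move servers done: Group_testMultiTableMove_272160960 => default" above.
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 41899)),
        "default");
    // Then drop the now-empty test group, matching the
    // "remove rsgroup Group_testMultiTableMove_272160960" entry above.
    rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_272160960");
  }
}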
2023-07-19 05:15:03,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:03,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:03,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:03,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:03,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:03,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:03,720 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:03,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:03,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:03,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:03,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:03,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:03,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:03,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:15:03,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:03,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 512 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744903736, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:15:03,737 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:15:03,740 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:03,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:03,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:03,741 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:03,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:03,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:03,763 INFO [Listener at localhost/38799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=508 (was 511), OpenFileDescriptor=814 (was 815), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=409 (was 350) - SystemLoadAverage LEAK? 
-, ProcessCount=173 (was 173), AvailableMemoryMB=3127 (was 3186) 2023-07-19 05:15:03,764 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-19 05:15:03,782 INFO [Listener at localhost/38799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=508, OpenFileDescriptor=796, MaxFileDescriptor=60000, SystemLoadAverage=409, ProcessCount=173, AvailableMemoryMB=3127 2023-07-19 05:15:03,782 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-19 05:15:03,783 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-19 05:15:03,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:03,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:03,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:03,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 05:15:03,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:03,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:03,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:03,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:03,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:03,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:03,804 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:03,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:03,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:03,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:03,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:03,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:03,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:03,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:15:03,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:03,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 540 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744903815, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:15:03,816 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 05:15:03,818 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:03,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:03,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:03,819 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:03,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:03,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:03,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:03,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:03,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-19 05:15:03,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 05:15:03,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:03,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:03,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:03,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:03,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:03,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979] to rsgroup oldGroup 2023-07-19 05:15:03,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 05:15:03,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:03,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:03,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 05:15:03,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41899,1689743683228, jenkins-hbase4.apache.org,41979,1689743683435] are moved back to default 2023-07-19 05:15:03,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-19 05:15:03,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:03,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:03,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:03,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-19 05:15:03,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:03,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-19 05:15:03,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:03,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:03,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:03,852 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-19 05:15:03,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-19 05:15:03,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 05:15:03,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:03,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 05:15:03,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:03,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:03,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:03,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43237] to rsgroup anotherRSGroup 2023-07-19 05:15:03,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-19 05:15:03,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 05:15:03,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:03,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 05:15:03,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 05:15:03,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43237,1689743687175] are moved back to default 2023-07-19 05:15:03,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-19 05:15:03,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:03,873 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:03,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:03,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-19 05:15:03,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:03,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-19 05:15:03,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:03,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-19 05:15:03,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:03,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:46730 deadline: 1689744903880, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-19 05:15:03,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-19 05:15:03,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:03,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:46730 deadline: 1689744903882, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-19 05:15:03,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-19 05:15:03,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:03,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:46730 deadline: 1689744903883, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-19 05:15:03,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-19 05:15:03,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:03,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 580 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:46730 deadline: 1689744903884, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-19 05:15:03,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:03,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:03,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:03,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 05:15:03,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:03,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43237] to rsgroup default 2023-07-19 05:15:03,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-19 05:15:03,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 05:15:03,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:03,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 05:15:03,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-19 05:15:03,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43237,1689743687175] are moved back to anotherRSGroup 2023-07-19 05:15:03,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-19 05:15:03,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:03,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-19 05:15:03,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 05:15:03,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:03,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-19 05:15:03,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:03,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:03,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-19 05:15:03,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:03,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979] to rsgroup default 2023-07-19 05:15:03,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 05:15:03,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:03,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:03,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-19 05:15:03,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41899,1689743683228, jenkins-hbase4.apache.org,41979,1689743683435] are moved back to oldGroup 2023-07-19 05:15:03,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-19 05:15:03,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:03,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-19 05:15:03,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:03,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 05:15:03,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:03,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:03,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 05:15:03,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:03,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:03,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:03,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:03,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:03,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:03,927 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:03,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:03,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:03,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:03,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:03,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:03,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:03,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:15:03,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:03,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 616 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744903937, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:15:03,938 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:15:03,940 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:03,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:03,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:03,941 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:03,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:03,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:03,962 INFO [Listener at localhost/38799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=512 (was 508) Potentially hanging thread: hconnection-0x5b070797-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5b070797-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5b070797-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5b070797-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=781 (was 796), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=409 (was 409), ProcessCount=173 (was 173), AvailableMemoryMB=3126 (was 3127) 2023-07-19 05:15:03,962 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-19 05:15:03,982 INFO [Listener at localhost/38799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=512, OpenFileDescriptor=781, MaxFileDescriptor=60000, SystemLoadAverage=409, ProcessCount=173, AvailableMemoryMB=3125 2023-07-19 05:15:03,982 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-19 05:15:03,982 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-19 05:15:03,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:03,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:03,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:03,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 05:15:03,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:03,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:03,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:03,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:03,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:03,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:03,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:03,998 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:03,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:04,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:04,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:04,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:04,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:04,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:04,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:04,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:15:04,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:04,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 644 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744904009, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:15:04,010 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:15:04,011 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:04,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:04,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:04,012 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:04,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:04,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:04,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:04,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:04,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-19 05:15:04,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 05:15:04,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:04,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:04,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:04,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:04,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:04,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:04,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979] to rsgroup oldgroup 2023-07-19 05:15:04,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 05:15:04,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:04,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:04,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:04,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 05:15:04,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41899,1689743683228, jenkins-hbase4.apache.org,41979,1689743683435] are moved back to default 2023-07-19 05:15:04,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-19 05:15:04,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:04,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:04,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:04,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-19 05:15:04,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:04,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:04,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-19 05:15:04,042 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:15:04,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 117 2023-07-19 05:15:04,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-19 05:15:04,044 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 05:15:04,044 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:04,044 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:04,045 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:04,048 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:15:04,049 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/testRename/abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:04,050 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/testRename/abc282e5f7835310e284bab60e5bb44a empty. 
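[Editor's note] The AddRSGroup and MoveServers RPC entries a few lines above record the test's group setup: creating rsgroup 'oldgroup' and moving two region servers into it. Below is a minimal client-side sketch of equivalent calls, assuming the branch-2 hbase-rsgroup RSGroupAdminClient API; the host/port values are illustrative placeholders, not taken from this log.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupSetupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // Create the target group, as in the "add rsgroup oldgroup" entry above.
          rsGroupAdmin.addRSGroup("oldgroup");

          // Move region servers from 'default' into 'oldgroup'
          // (addresses below are placeholders, not the hosts in this log).
          Set<Address> servers = new HashSet<>(Arrays.asList(
              Address.fromParts("rs-host-1", 16020),
              Address.fromParts("rs-host-2", 16020)));
          rsGroupAdmin.moveServers(servers, "oldgroup");
        }
      }
    }

As the log shows, the master first moves any regions on those servers back to their source group ("Moving 0 region(s) to group default") before recording the membership change in the /hbase/rsgroup znodes.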
2023-07-19 05:15:04,050 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/testRename/abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:04,050 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-19 05:15:04,065 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-19 05:15:04,066 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => abc282e5f7835310e284bab60e5bb44a, NAME => 'testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:15:04,079 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:04,079 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing abc282e5f7835310e284bab60e5bb44a, disabling compactions & flushes 2023-07-19 05:15:04,079 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:04,079 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:04,079 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. after waiting 0 ms 2023-07-19 05:15:04,079 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:04,079 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:04,080 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for abc282e5f7835310e284bab60e5bb44a: 2023-07-19 05:15:04,082 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:15:04,082 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689743704082"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743704082"}]},"ts":"1689743704082"} 2023-07-19 05:15:04,084 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-19 05:15:04,084 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:15:04,084 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743704084"}]},"ts":"1689743704084"} 2023-07-19 05:15:04,085 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-19 05:15:04,088 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:15:04,088 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:15:04,088 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:15:04,088 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:15:04,089 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=abc282e5f7835310e284bab60e5bb44a, ASSIGN}] 2023-07-19 05:15:04,090 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=abc282e5f7835310e284bab60e5bb44a, ASSIGN 2023-07-19 05:15:04,091 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=abc282e5f7835310e284bab60e5bb44a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45681,1689743683028; forceNewPlan=false, retain=false 2023-07-19 05:15:04,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-19 05:15:04,241 INFO [jenkins-hbase4:35853] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-19 05:15:04,242 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=abc282e5f7835310e284bab60e5bb44a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:04,243 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689743704242"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743704242"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743704242"}]},"ts":"1689743704242"} 2023-07-19 05:15:04,244 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure abc282e5f7835310e284bab60e5bb44a, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:15:04,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-19 05:15:04,402 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:04,402 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => abc282e5f7835310e284bab60e5bb44a, NAME => 'testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:04,403 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:04,403 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:04,403 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:04,403 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:04,405 INFO [StoreOpener-abc282e5f7835310e284bab60e5bb44a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:04,406 DEBUG [StoreOpener-abc282e5f7835310e284bab60e5bb44a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a/tr 2023-07-19 05:15:04,407 DEBUG [StoreOpener-abc282e5f7835310e284bab60e5bb44a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a/tr 2023-07-19 05:15:04,407 INFO [StoreOpener-abc282e5f7835310e284bab60e5bb44a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region abc282e5f7835310e284bab60e5bb44a columnFamilyName tr 2023-07-19 05:15:04,408 INFO [StoreOpener-abc282e5f7835310e284bab60e5bb44a-1] regionserver.HStore(310): Store=abc282e5f7835310e284bab60e5bb44a/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:04,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:04,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:04,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:04,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:04,418 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened abc282e5f7835310e284bab60e5bb44a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10867158560, jitterRate=0.012083008885383606}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:04,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for abc282e5f7835310e284bab60e5bb44a: 2023-07-19 05:15:04,418 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a., pid=119, masterSystemTime=1689743704396 2023-07-19 05:15:04,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:04,420 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 
2023-07-19 05:15:04,420 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=abc282e5f7835310e284bab60e5bb44a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:04,420 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689743704420"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743704420"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743704420"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743704420"}]},"ts":"1689743704420"} 2023-07-19 05:15:04,423 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-19 05:15:04,423 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure abc282e5f7835310e284bab60e5bb44a, server=jenkins-hbase4.apache.org,45681,1689743683028 in 178 msec 2023-07-19 05:15:04,425 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-19 05:15:04,425 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=abc282e5f7835310e284bab60e5bb44a, ASSIGN in 334 msec 2023-07-19 05:15:04,425 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:15:04,426 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743704426"}]},"ts":"1689743704426"} 2023-07-19 05:15:04,427 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-19 05:15:04,429 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:15:04,430 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=testRename in 389 msec 2023-07-19 05:15:04,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-19 05:15:04,647 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 117 completed 2023-07-19 05:15:04,647 DEBUG [Listener at localhost/38799] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-19 05:15:04,647 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:04,654 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-19 05:15:04,654 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:04,654 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
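[Editor's note] The CreateTableProcedure entries above (pid=117) record the creation of 'testRename' with a single column family 'tr' and default attributes, after which the test waits for region assignment via HBaseTestingUtility. A hedged sketch of an equivalent client-side creation, assuming a standard HBase 2.x Admin connection; the descriptor mirrors the attributes printed in the log.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTestRenameTableSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("testRename");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Single family 'tr'; VERSIONS => '1' and the other attributes shown in the
          // log are the 2.x defaults, so ColumnFamilyDescriptorBuilder.of("tr") suffices.
          admin.createTable(TableDescriptorBuilder.newBuilder(table)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
              .build());
          // The master runs the create as a procedure (pid=117 above);
          // Admin.createTable blocks until the table is created and enabled.
        }
      }
    }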
2023-07-19 05:15:04,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-19 05:15:04,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 05:15:04,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:04,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:04,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:04,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-19 05:15:04,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(345): Moving region abc282e5f7835310e284bab60e5bb44a to RSGroup oldgroup 2023-07-19 05:15:04,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:15:04,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:15:04,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:15:04,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:15:04,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:15:04,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=abc282e5f7835310e284bab60e5bb44a, REOPEN/MOVE 2023-07-19 05:15:04,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-19 05:15:04,663 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=abc282e5f7835310e284bab60e5bb44a, REOPEN/MOVE 2023-07-19 05:15:04,663 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=abc282e5f7835310e284bab60e5bb44a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:04,664 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689743704663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743704663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743704663"}]},"ts":"1689743704663"} 2023-07-19 05:15:04,665 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, 
ppid=120, state=RUNNABLE; CloseRegionProcedure abc282e5f7835310e284bab60e5bb44a, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:15:04,818 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:04,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing abc282e5f7835310e284bab60e5bb44a, disabling compactions & flushes 2023-07-19 05:15:04,819 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:04,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:04,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. after waiting 0 ms 2023-07-19 05:15:04,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:04,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:15:04,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:04,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for abc282e5f7835310e284bab60e5bb44a: 2023-07-19 05:15:04,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding abc282e5f7835310e284bab60e5bb44a move to jenkins-hbase4.apache.org,41979,1689743683435 record at close sequenceid=2 2023-07-19 05:15:04,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:04,831 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=abc282e5f7835310e284bab60e5bb44a, regionState=CLOSED 2023-07-19 05:15:04,831 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689743704831"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743704831"}]},"ts":"1689743704831"} 2023-07-19 05:15:04,835 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-19 05:15:04,835 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure abc282e5f7835310e284bab60e5bb44a, server=jenkins-hbase4.apache.org,45681,1689743683028 in 168 msec 2023-07-19 05:15:04,835 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=abc282e5f7835310e284bab60e5bb44a, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41979,1689743683435; 
forceNewPlan=false, retain=false 2023-07-19 05:15:04,986 INFO [jenkins-hbase4:35853] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 05:15:04,986 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=abc282e5f7835310e284bab60e5bb44a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:15:04,986 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689743704986"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743704986"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743704986"}]},"ts":"1689743704986"} 2023-07-19 05:15:04,988 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure abc282e5f7835310e284bab60e5bb44a, server=jenkins-hbase4.apache.org,41979,1689743683435}] 2023-07-19 05:15:05,145 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:05,145 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => abc282e5f7835310e284bab60e5bb44a, NAME => 'testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:05,145 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:05,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:05,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:05,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:05,147 INFO [StoreOpener-abc282e5f7835310e284bab60e5bb44a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:05,148 DEBUG [StoreOpener-abc282e5f7835310e284bab60e5bb44a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a/tr 2023-07-19 05:15:05,148 DEBUG [StoreOpener-abc282e5f7835310e284bab60e5bb44a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a/tr 2023-07-19 05:15:05,149 INFO [StoreOpener-abc282e5f7835310e284bab60e5bb44a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region abc282e5f7835310e284bab60e5bb44a columnFamilyName tr 2023-07-19 05:15:05,149 INFO [StoreOpener-abc282e5f7835310e284bab60e5bb44a-1] regionserver.HStore(310): Store=abc282e5f7835310e284bab60e5bb44a/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:05,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:05,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:05,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:05,159 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened abc282e5f7835310e284bab60e5bb44a; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9774971840, jitterRate=-0.08963480591773987}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:05,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for abc282e5f7835310e284bab60e5bb44a: 2023-07-19 05:15:05,161 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a., pid=122, masterSystemTime=1689743705141 2023-07-19 05:15:05,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:05,164 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 
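[Editor's note] The REOPEN/MOVE procedure above (pid=120) is driven by the "move tables [testRename] to rsgroup oldgroup" request logged earlier: the region is closed on its current server and reopened on a server belonging to the target group. A sketch of that client call, again assuming the branch-2 RSGroupAdminClient API.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTableToGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // Reassign the table's regions to servers of 'oldgroup'; the master runs a
          // TransitRegionStateProcedure (REOPEN/MOVE) per region, as seen in the log.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("testRename")), "oldgroup");

          // Confirm the table is now tracked under the target group.
          RSGroupInfo info =
              rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
          System.out.println("testRename is in rsgroup: " + info.getName());
        }
      }
    }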
2023-07-19 05:15:05,165 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=abc282e5f7835310e284bab60e5bb44a, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:15:05,165 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689743705164"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743705164"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743705164"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743705164"}]},"ts":"1689743705164"} 2023-07-19 05:15:05,169 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-19 05:15:05,169 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure abc282e5f7835310e284bab60e5bb44a, server=jenkins-hbase4.apache.org,41979,1689743683435 in 178 msec 2023-07-19 05:15:05,171 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=abc282e5f7835310e284bab60e5bb44a, REOPEN/MOVE in 507 msec 2023-07-19 05:15:05,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-19 05:15:05,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-19 05:15:05,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:05,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:05,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:05,669 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:05,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-19 05:15:05,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:15:05,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-19 05:15:05,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:05,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-19 05:15:05,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:15:05,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:05,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:05,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-19 05:15:05,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 05:15:05,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 05:15:05,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:05,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:05,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 05:15:05,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:05,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:05,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:05,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43237] to rsgroup normal 2023-07-19 05:15:05,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 05:15:05,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 05:15:05,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:05,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:05,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 05:15:05,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 05:15:05,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43237,1689743687175] are moved back to default 2023-07-19 05:15:05,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-19 05:15:05,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:05,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:05,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:05,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-19 05:15:05,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:05,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:05,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-19 05:15:05,706 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:15:05,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 123 2023-07-19 05:15:05,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-19 05:15:05,708 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 05:15:05,708 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 05:15:05,709 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:05,709 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-19 05:15:05,709 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 05:15:05,711 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:15:05,713 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:05,713 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170 empty. 2023-07-19 05:15:05,714 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:05,714 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-19 05:15:05,727 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-19 05:15:05,728 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => c29fa493d7d1fcfaa46cb25b42ce4170, NAME => 'unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:15:05,738 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:05,738 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing c29fa493d7d1fcfaa46cb25b42ce4170, disabling compactions & flushes 2023-07-19 05:15:05,738 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:05,738 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:05,738 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. after waiting 0 ms 2023-07-19 05:15:05,738 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:05,738 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 
2023-07-19 05:15:05,738 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for c29fa493d7d1fcfaa46cb25b42ce4170: 2023-07-19 05:15:05,741 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:15:05,742 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689743705741"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743705741"}]},"ts":"1689743705741"} 2023-07-19 05:15:05,743 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 05:15:05,745 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:15:05,745 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743705745"}]},"ts":"1689743705745"} 2023-07-19 05:15:05,746 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-19 05:15:05,749 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=c29fa493d7d1fcfaa46cb25b42ce4170, ASSIGN}] 2023-07-19 05:15:05,751 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=c29fa493d7d1fcfaa46cb25b42ce4170, ASSIGN 2023-07-19 05:15:05,751 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=c29fa493d7d1fcfaa46cb25b42ce4170, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45681,1689743683028; forceNewPlan=false, retain=false 2023-07-19 05:15:05,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-19 05:15:05,903 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=c29fa493d7d1fcfaa46cb25b42ce4170, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:05,903 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689743705903"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743705903"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743705903"}]},"ts":"1689743705903"} 2023-07-19 05:15:05,905 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=124, state=RUNNABLE; OpenRegionProcedure c29fa493d7d1fcfaa46cb25b42ce4170, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:15:06,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=123 2023-07-19 05:15:06,060 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:06,060 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c29fa493d7d1fcfaa46cb25b42ce4170, NAME => 'unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:06,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:06,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,062 INFO [StoreOpener-c29fa493d7d1fcfaa46cb25b42ce4170-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,063 DEBUG [StoreOpener-c29fa493d7d1fcfaa46cb25b42ce4170-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170/ut 2023-07-19 05:15:06,063 DEBUG [StoreOpener-c29fa493d7d1fcfaa46cb25b42ce4170-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170/ut 2023-07-19 05:15:06,064 INFO [StoreOpener-c29fa493d7d1fcfaa46cb25b42ce4170-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c29fa493d7d1fcfaa46cb25b42ce4170 columnFamilyName ut 2023-07-19 05:15:06,064 INFO [StoreOpener-c29fa493d7d1fcfaa46cb25b42ce4170-1] regionserver.HStore(310): Store=c29fa493d7d1fcfaa46cb25b42ce4170/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:06,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:06,070 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c29fa493d7d1fcfaa46cb25b42ce4170; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9670237760, jitterRate=-0.09938892722129822}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:06,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c29fa493d7d1fcfaa46cb25b42ce4170: 2023-07-19 05:15:06,071 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170., pid=125, masterSystemTime=1689743706056 2023-07-19 05:15:06,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:06,072 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 
2023-07-19 05:15:06,073 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=c29fa493d7d1fcfaa46cb25b42ce4170, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:06,073 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689743706073"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743706073"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743706073"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743706073"}]},"ts":"1689743706073"} 2023-07-19 05:15:06,075 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=124 2023-07-19 05:15:06,076 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=124, state=SUCCESS; OpenRegionProcedure c29fa493d7d1fcfaa46cb25b42ce4170, server=jenkins-hbase4.apache.org,45681,1689743683028 in 169 msec 2023-07-19 05:15:06,077 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-19 05:15:06,077 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=c29fa493d7d1fcfaa46cb25b42ce4170, ASSIGN in 327 msec 2023-07-19 05:15:06,078 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:15:06,078 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743706078"}]},"ts":"1689743706078"} 2023-07-19 05:15:06,079 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-19 05:15:06,086 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:15:06,087 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; CreateTableProcedure table=unmovedTable in 383 msec 2023-07-19 05:15:06,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-19 05:15:06,311 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 123 completed 2023-07-19 05:15:06,311 DEBUG [Listener at localhost/38799] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-19 05:15:06,311 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:06,316 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-19 05:15:06,316 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:06,316 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
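[Editor's note] At this point both tables exist: 'testRename' is hosted on the 'oldgroup' servers, and 'unmovedTable' is fully assigned (the entries that follow move it to the 'normal' group). One way a client can cross-check that a table's regions landed on servers of the expected group is to compare each region location against the group membership; a sketch, assuming the same RSGroupAdminClient API plus the standard RegionLocator.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class VerifyGroupPlacementSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("testRename");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(table)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          RSGroupInfo group = rsGroupAdmin.getRSGroupInfo("oldgroup");

          // Every region location should resolve to a server listed in the group.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            Address server = Address.fromParts(loc.getHostname(), loc.getPort());
            System.out.println(loc.getRegion().getEncodedName() + " on " + server
                + " -> in oldgroup: " + group.getServers().contains(server));
          }
        }
      }
    }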
2023-07-19 05:15:06,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-19 05:15:06,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 05:15:06,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 05:15:06,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:06,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:06,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 05:15:06,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-19 05:15:06,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(345): Moving region c29fa493d7d1fcfaa46cb25b42ce4170 to RSGroup normal 2023-07-19 05:15:06,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=c29fa493d7d1fcfaa46cb25b42ce4170, REOPEN/MOVE 2023-07-19 05:15:06,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-19 05:15:06,325 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=c29fa493d7d1fcfaa46cb25b42ce4170, REOPEN/MOVE 2023-07-19 05:15:06,326 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=c29fa493d7d1fcfaa46cb25b42ce4170, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:06,326 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689743706326"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743706326"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743706326"}]},"ts":"1689743706326"} 2023-07-19 05:15:06,327 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure c29fa493d7d1fcfaa46cb25b42ce4170, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:15:06,480 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,482 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c29fa493d7d1fcfaa46cb25b42ce4170, disabling compactions & flushes 2023-07-19 05:15:06,482 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 
2023-07-19 05:15:06,482 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:06,482 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. after waiting 0 ms 2023-07-19 05:15:06,482 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:06,485 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:15:06,486 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:06,486 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c29fa493d7d1fcfaa46cb25b42ce4170: 2023-07-19 05:15:06,486 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c29fa493d7d1fcfaa46cb25b42ce4170 move to jenkins-hbase4.apache.org,43237,1689743687175 record at close sequenceid=2 2023-07-19 05:15:06,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,488 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=c29fa493d7d1fcfaa46cb25b42ce4170, regionState=CLOSED 2023-07-19 05:15:06,488 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689743706488"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743706488"}]},"ts":"1689743706488"} 2023-07-19 05:15:06,491 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-19 05:15:06,491 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure c29fa493d7d1fcfaa46cb25b42ce4170, server=jenkins-hbase4.apache.org,45681,1689743683028 in 162 msec 2023-07-19 05:15:06,491 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=c29fa493d7d1fcfaa46cb25b42ce4170, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43237,1689743687175; forceNewPlan=false, retain=false 2023-07-19 05:15:06,642 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=c29fa493d7d1fcfaa46cb25b42ce4170, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:06,642 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689743706642"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743706642"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743706642"}]},"ts":"1689743706642"} 2023-07-19 05:15:06,643 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure c29fa493d7d1fcfaa46cb25b42ce4170, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:15:06,799 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:06,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c29fa493d7d1fcfaa46cb25b42ce4170, NAME => 'unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:06,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:06,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,802 INFO [StoreOpener-c29fa493d7d1fcfaa46cb25b42ce4170-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,803 DEBUG [StoreOpener-c29fa493d7d1fcfaa46cb25b42ce4170-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170/ut 2023-07-19 05:15:06,803 DEBUG [StoreOpener-c29fa493d7d1fcfaa46cb25b42ce4170-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170/ut 2023-07-19 05:15:06,804 INFO [StoreOpener-c29fa493d7d1fcfaa46cb25b42ce4170-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
c29fa493d7d1fcfaa46cb25b42ce4170 columnFamilyName ut 2023-07-19 05:15:06,805 INFO [StoreOpener-c29fa493d7d1fcfaa46cb25b42ce4170-1] regionserver.HStore(310): Store=c29fa493d7d1fcfaa46cb25b42ce4170/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:06,805 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,807 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:06,811 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c29fa493d7d1fcfaa46cb25b42ce4170; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10082905440, jitterRate=-0.06095625460147858}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:06,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c29fa493d7d1fcfaa46cb25b42ce4170: 2023-07-19 05:15:06,812 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170., pid=128, masterSystemTime=1689743706795 2023-07-19 05:15:06,815 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:06,815 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 
2023-07-19 05:15:06,816 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=c29fa493d7d1fcfaa46cb25b42ce4170, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:06,816 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689743706816"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743706816"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743706816"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743706816"}]},"ts":"1689743706816"} 2023-07-19 05:15:06,819 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-19 05:15:06,819 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure c29fa493d7d1fcfaa46cb25b42ce4170, server=jenkins-hbase4.apache.org,43237,1689743687175 in 174 msec 2023-07-19 05:15:06,820 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=c29fa493d7d1fcfaa46cb25b42ce4170, REOPEN/MOVE in 495 msec 2023-07-19 05:15:07,043 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-19 05:15:07,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-19 05:15:07,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 
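
The block above is the full REOPEN/MOVE cycle triggered by moving unmovedTable into rsgroup "normal": pid=126 closes region c29fa493d7d1fcfaa46cb25b42ce4170 on one server and reopens it on a server of the target group. A hedged sketch of the client call that starts it follows, assuming the hbase-rsgroup coprocessor endpoint is loaded and an open Connection named conn; group and table names are taken from the log, the helper name is illustrative.

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Sketch only: ask the master to remap the table's rsgroup. The master then runs one
    // TransitRegionStateProcedure (REOPEN/MOVE) per region to relocate it onto a server of
    // the target group, which is the pid=126/127/128 chain logged above.
    static void moveUnmovedTableToNormal(Connection conn) throws Exception {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
      RSGroupInfo group = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("unmovedTable"));
      // group.getName() should now be "normal", matching the RSGroupAdminServer(369) line above.
    }
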
2023-07-19 05:15:07,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:07,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:07,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:07,353 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:07,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-19 05:15:07,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:15:07,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-19 05:15:07,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:07,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-19 05:15:07,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:15:07,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-19 05:15:07,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 05:15:07,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:07,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:07,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 05:15:07,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-19 05:15:07,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-19 05:15:07,367 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:07,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:07,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-19 05:15:07,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:07,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-19 05:15:07,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:15:07,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-19 05:15:07,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:15:07,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:07,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:07,377 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-19 05:15:07,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 05:15:07,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:07,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:07,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 05:15:07,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 05:15:07,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-19 05:15:07,386 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(345): Moving region c29fa493d7d1fcfaa46cb25b42ce4170 to RSGroup default 2023-07-19 05:15:07,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=c29fa493d7d1fcfaa46cb25b42ce4170, REOPEN/MOVE 2023-07-19 05:15:07,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-19 05:15:07,386 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=c29fa493d7d1fcfaa46cb25b42ce4170, REOPEN/MOVE 2023-07-19 05:15:07,387 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=c29fa493d7d1fcfaa46cb25b42ce4170, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:07,387 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689743707387"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743707387"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743707387"}]},"ts":"1689743707387"} 2023-07-19 05:15:07,388 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure c29fa493d7d1fcfaa46cb25b42ce4170, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:15:07,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:07,543 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c29fa493d7d1fcfaa46cb25b42ce4170, disabling compactions & flushes 2023-07-19 05:15:07,543 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:07,543 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:07,543 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. after waiting 0 ms 2023-07-19 05:15:07,543 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:07,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 05:15:07,548 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 
2023-07-19 05:15:07,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c29fa493d7d1fcfaa46cb25b42ce4170: 2023-07-19 05:15:07,548 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c29fa493d7d1fcfaa46cb25b42ce4170 move to jenkins-hbase4.apache.org,45681,1689743683028 record at close sequenceid=5 2023-07-19 05:15:07,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:07,550 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=c29fa493d7d1fcfaa46cb25b42ce4170, regionState=CLOSED 2023-07-19 05:15:07,550 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689743707550"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743707550"}]},"ts":"1689743707550"} 2023-07-19 05:15:07,553 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-19 05:15:07,553 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure c29fa493d7d1fcfaa46cb25b42ce4170, server=jenkins-hbase4.apache.org,43237,1689743687175 in 163 msec 2023-07-19 05:15:07,553 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=c29fa493d7d1fcfaa46cb25b42ce4170, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45681,1689743683028; forceNewPlan=false, retain=false 2023-07-19 05:15:07,704 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=c29fa493d7d1fcfaa46cb25b42ce4170, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:07,704 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689743707704"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743707704"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743707704"}]},"ts":"1689743707704"} 2023-07-19 05:15:07,706 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure c29fa493d7d1fcfaa46cb25b42ce4170, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:15:07,861 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 
2023-07-19 05:15:07,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c29fa493d7d1fcfaa46cb25b42ce4170, NAME => 'unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:07,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:07,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:07,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:07,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:07,863 INFO [StoreOpener-c29fa493d7d1fcfaa46cb25b42ce4170-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:07,864 DEBUG [StoreOpener-c29fa493d7d1fcfaa46cb25b42ce4170-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170/ut 2023-07-19 05:15:07,864 DEBUG [StoreOpener-c29fa493d7d1fcfaa46cb25b42ce4170-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170/ut 2023-07-19 05:15:07,864 INFO [StoreOpener-c29fa493d7d1fcfaa46cb25b42ce4170-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c29fa493d7d1fcfaa46cb25b42ce4170 columnFamilyName ut 2023-07-19 05:15:07,865 INFO [StoreOpener-c29fa493d7d1fcfaa46cb25b42ce4170-1] regionserver.HStore(310): Store=c29fa493d7d1fcfaa46cb25b42ce4170/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:07,866 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:07,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:07,869 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:07,870 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c29fa493d7d1fcfaa46cb25b42ce4170; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10698598720, jitterRate=-0.003615349531173706}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:07,870 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c29fa493d7d1fcfaa46cb25b42ce4170: 2023-07-19 05:15:07,871 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170., pid=131, masterSystemTime=1689743707857 2023-07-19 05:15:07,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:07,872 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:07,873 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=c29fa493d7d1fcfaa46cb25b42ce4170, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:07,873 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689743707873"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743707873"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743707873"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743707873"}]},"ts":"1689743707873"} 2023-07-19 05:15:07,875 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-19 05:15:07,875 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure c29fa493d7d1fcfaa46cb25b42ce4170, server=jenkins-hbase4.apache.org,45681,1689743683028 in 168 msec 2023-07-19 05:15:07,876 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=c29fa493d7d1fcfaa46cb25b42ce4170, REOPEN/MOVE in 489 msec 2023-07-19 05:15:08,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-19 05:15:08,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
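
Between the two table moves above, the log also records RSGroupAdminService.RenameRSGroup turning "oldgroup" into "newgroup" and rewriting the group znodes under /hbase/rsgroup. A sketch of that call follows, assuming this branch's RSGroupAdminClient exposes renameRSGroup(oldName, newName) as the RPC name suggests; conn is an open Connection and the check on testRename is illustrative.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Sketch only: rename the group, then confirm a table mapped to it follows the rename.
    static void renameGroup(Connection conn) throws Exception {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");   // assumed client method for the RenameRSGroup RPC
      RSGroupInfo g = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
      // Expect g.getName() == "newgroup"; unmovedTable, moved to "normal" earlier, is unaffected.
    }
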
2023-07-19 05:15:08,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:08,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43237] to rsgroup default 2023-07-19 05:15:08,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 05:15:08,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:08,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:08,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 05:15:08,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 05:15:08,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-19 05:15:08,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43237,1689743687175] are moved back to normal 2023-07-19 05:15:08,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-19 05:15:08,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:08,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-19 05:15:08,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:08,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:08,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 05:15:08,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-19 05:15:08,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:08,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:08,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
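
The entries above are the between-test cleanup: the server that had been carved out for "normal" is moved back to "default" and the now-empty groups are removed. A minimal sketch of that pattern, assuming an open Connection named conn; the host/port literal is copied from the log and would normally be derived from the running cluster rather than hard-coded.

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Sketch only: drain a test group's servers back to the default group, then drop the group.
    // removeRSGroup only succeeds once the group no longer holds any servers or tables.
    static void cleanupNormalGroup(Connection conn) throws Exception {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 43237)), "default");
      rsGroupAdmin.removeRSGroup("normal");
    }
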
2023-07-19 05:15:08,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:08,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:08,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:08,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:08,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:08,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 05:15:08,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 05:15:08,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:08,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-19 05:15:08,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:08,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 05:15:08,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:08,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-19 05:15:08,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(345): Moving region abc282e5f7835310e284bab60e5bb44a to RSGroup default 2023-07-19 05:15:08,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=abc282e5f7835310e284bab60e5bb44a, REOPEN/MOVE 2023-07-19 05:15:08,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-19 05:15:08,419 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=abc282e5f7835310e284bab60e5bb44a, REOPEN/MOVE 2023-07-19 05:15:08,420 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=abc282e5f7835310e284bab60e5bb44a, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:15:08,420 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689743708420"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743708420"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743708420"}]},"ts":"1689743708420"} 2023-07-19 05:15:08,421 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE; CloseRegionProcedure abc282e5f7835310e284bab60e5bb44a, server=jenkins-hbase4.apache.org,41979,1689743683435}] 2023-07-19 05:15:08,575 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:08,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing abc282e5f7835310e284bab60e5bb44a, disabling compactions & flushes 2023-07-19 05:15:08,576 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:08,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:08,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. after waiting 0 ms 2023-07-19 05:15:08,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:08,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 05:15:08,582 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 
2023-07-19 05:15:08,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for abc282e5f7835310e284bab60e5bb44a: 2023-07-19 05:15:08,582 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding abc282e5f7835310e284bab60e5bb44a move to jenkins-hbase4.apache.org,43237,1689743687175 record at close sequenceid=5 2023-07-19 05:15:08,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:08,585 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=abc282e5f7835310e284bab60e5bb44a, regionState=CLOSED 2023-07-19 05:15:08,585 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689743708585"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743708585"}]},"ts":"1689743708585"} 2023-07-19 05:15:08,589 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-19 05:15:08,589 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; CloseRegionProcedure abc282e5f7835310e284bab60e5bb44a, server=jenkins-hbase4.apache.org,41979,1689743683435 in 165 msec 2023-07-19 05:15:08,590 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=abc282e5f7835310e284bab60e5bb44a, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43237,1689743687175; forceNewPlan=false, retain=false 2023-07-19 05:15:08,741 INFO [jenkins-hbase4:35853] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 05:15:08,741 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=abc282e5f7835310e284bab60e5bb44a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:08,741 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689743708741"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743708741"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743708741"}]},"ts":"1689743708741"} 2023-07-19 05:15:08,743 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=132, state=RUNNABLE; OpenRegionProcedure abc282e5f7835310e284bab60e5bb44a, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:15:08,899 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 
2023-07-19 05:15:08,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => abc282e5f7835310e284bab60e5bb44a, NAME => 'testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:08,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:08,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:08,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:08,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:08,903 INFO [StoreOpener-abc282e5f7835310e284bab60e5bb44a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:08,904 DEBUG [StoreOpener-abc282e5f7835310e284bab60e5bb44a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a/tr 2023-07-19 05:15:08,904 DEBUG [StoreOpener-abc282e5f7835310e284bab60e5bb44a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a/tr 2023-07-19 05:15:08,904 INFO [StoreOpener-abc282e5f7835310e284bab60e5bb44a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region abc282e5f7835310e284bab60e5bb44a columnFamilyName tr 2023-07-19 05:15:08,905 INFO [StoreOpener-abc282e5f7835310e284bab60e5bb44a-1] regionserver.HStore(310): Store=abc282e5f7835310e284bab60e5bb44a/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:08,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:08,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:08,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:08,912 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened abc282e5f7835310e284bab60e5bb44a; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11703715840, jitterRate=0.08999347686767578}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:08,912 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for abc282e5f7835310e284bab60e5bb44a: 2023-07-19 05:15:08,913 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a., pid=134, masterSystemTime=1689743708895 2023-07-19 05:15:08,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:08,915 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:08,915 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=abc282e5f7835310e284bab60e5bb44a, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:08,916 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689743708915"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743708915"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743708915"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743708915"}]},"ts":"1689743708915"} 2023-07-19 05:15:08,919 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-19 05:15:08,919 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; OpenRegionProcedure abc282e5f7835310e284bab60e5bb44a, server=jenkins-hbase4.apache.org,43237,1689743687175 in 174 msec 2023-07-19 05:15:08,920 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=abc282e5f7835310e284bab60e5bb44a, REOPEN/MOVE in 501 msec 2023-07-19 05:15:09,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure.ProcedureSyncWait(216): waitFor pid=132 2023-07-19 05:15:09,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
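
Interleaved with the moves, the RPC handler repeatedly serves ListRSGroupInfos and GetRSGroupInfo requests; the test uses them to verify which groups exist and where each table is mapped after every step. A small sketch of that verification, assuming an open Connection named conn; the printout is illustrative only.

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Sketch only: dump every rsgroup with its servers and tables, the same data returned by
    // the "list rsgroup" requests logged above.
    static void dumpGroups(Connection conn) throws Exception {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
        System.out.println(info.getName()
            + " servers=" + info.getServers()
            + " tables=" + info.getTables());
      }
    }
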
2023-07-19 05:15:09,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:09,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979] to rsgroup default 2023-07-19 05:15:09,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:09,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 05:15:09,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:09,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-19 05:15:09,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41899,1689743683228, jenkins-hbase4.apache.org,41979,1689743683435] are moved back to newgroup 2023-07-19 05:15:09,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-19 05:15:09,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:09,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-19 05:15:09,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:09,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:09,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:09,433 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:09,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:09,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:09,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:09,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:09,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:09,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:09,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:09,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:15:09,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:09,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 764 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744909449, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:15:09,449 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 05:15:09,451 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:09,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:09,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:09,452 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:09,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:09,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:09,470 INFO [Listener at localhost/38799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=507 (was 512), OpenFileDescriptor=780 (was 781), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=377 (was 409), ProcessCount=173 (was 173), AvailableMemoryMB=3064 (was 3125) 2023-07-19 05:15:09,470 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-19 05:15:09,487 INFO [Listener at localhost/38799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=507, OpenFileDescriptor=780, MaxFileDescriptor=60000, SystemLoadAverage=377, ProcessCount=173, AvailableMemoryMB=3063 2023-07-19 05:15:09,487 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-19 05:15:09,487 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-19 05:15:09,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:09,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:09,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:09,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
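The ConstraintException above recurs in every setup/teardown of this test class: TestRSGroupsBase.tearDownAfterMethod asks to move the master's address (port 35853) into the "master" rsgroup, and RSGroupAdminServer.moveServers rejects any address that is not an online region server. A small sketch of that call path, assuming Address.fromParts and the moveServers(Set<Address>, String) signature shown in the stack trace; the hostname and port are the ones from this log.

import java.util.Collections;

import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterSketch {
  // rsGroupAdmin is an RSGroupAdminClient built as in the earlier sketch.
  static void moveMasterToGroup(RSGroupAdminClient rsGroupAdmin) throws Exception {
    // The master's RPC endpoint, which is not an online region server.
    Address masterAddr = Address.fromParts("jenkins-hbase4.apache.org", 35853);
    try {
      // Same call path as the stack trace: client moveServers() ->
      // RSGroupAdminService.MoveServers on the master ->
      // RSGroupAdminServer.moveServers(), which throws because the address
      // is "either offline or it does not exist".
      rsGroupAdmin.moveServers(Collections.singleton(masterAddr), "master");
    } catch (ConstraintException expected) {
      // The test logs this as "Got this on setup, FYI" and continues.
    }
  }
}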
2023-07-19 05:15:09,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:09,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:09,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:09,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:09,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:09,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:09,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:09,505 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:09,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:09,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:09,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:09,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:09,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:09,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:09,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:09,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:15:09,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:09,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 792 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744909515, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:15:09,516 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:15:09,517 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:09,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:09,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:09,518 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:09,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:09,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:09,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-19 05:15:09,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:15:09,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-19 05:15:09,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-19 05:15:09,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-19 05:15:09,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:09,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-19 05:15:09,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:09,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 804 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:46730 deadline: 1689744909526, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-19 05:15:09,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-19 05:15:09,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:09,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 807 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.14.131:46730 deadline: 1689744909528, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-19 05:15:09,531 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-19 05:15:09,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-19 05:15:09,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-19 05:15:09,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:09,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 811 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:46730 deadline: 1689744909536, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-19 05:15:09,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:09,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:09,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:09,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
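testBogusArgs exercises the admin API with names that do not exist: the lookups for group "bogus", table "nonexistent" and server bogus:123 complete without error entries, while the mutating calls are rejected with the ConstraintExceptions logged above ("RSGroup bogus does not exist", "RSGroup does not exist: bogus"). A rough sketch of those checks; the Java method names are assumed to mirror the RPCs logged here, and the null returns for unknown names are an assumption suggested by the absence of error logs for the three retrieval calls.

import java.util.Collections;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class BogusArgsSketch {
  static void check(RSGroupAdminClient rsGroupAdmin) throws Exception {
    // Lookups for unknown names are assumed to return null rather than throw
    // (run with -ea to have the asserts checked).
    assert rsGroupAdmin.getRSGroupInfo("bogus") == null;
    assert rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent")) == null;
    assert rsGroupAdmin.getRSGroupOfServer(Address.fromParts("bogus", 123)) == null;

    // Mutating calls against an unknown group fail fast with ConstraintException,
    // matching the ipc.MetricsHBaseServer entries in the log.
    try {
      rsGroupAdmin.removeRSGroup("bogus");
    } catch (ConstraintException expected) { /* "RSGroup bogus does not exist" */ }
    try {
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
    } catch (ConstraintException expected) { /* "RSGroup does not exist: bogus" */ }
    try {
      rsGroupAdmin.balanceRSGroup("bogus");
    } catch (ConstraintException expected) { /* "RSGroup does not exist: bogus" */ }
  }
}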
2023-07-19 05:15:09,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:09,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:09,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:09,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:09,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:09,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:09,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:09,552 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:09,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:09,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:09,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:09,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:09,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:09,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:09,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:09,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:15:09,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:09,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 835 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744909561, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:15:09,564 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:15:09,565 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:09,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:09,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:09,566 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:09,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:09,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:09,584 INFO [Listener at localhost/38799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=511 (was 507) Potentially hanging thread: hconnection-0x18ddab65-shared-pool-30 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5b070797-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5b070797-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18ddab65-shared-pool-29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=780 (was 780), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=377 (was 377), ProcessCount=173 (was 173), AvailableMemoryMB=3064 (was 3063) - AvailableMemoryMB LEAK? - 2023-07-19 05:15:09,584 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-19 05:15:09,599 INFO [Listener at localhost/38799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=511, OpenFileDescriptor=780, MaxFileDescriptor=60000, SystemLoadAverage=377, ProcessCount=173, AvailableMemoryMB=3064 2023-07-19 05:15:09,599 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-19 05:15:09,599 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-19 05:15:09,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:09,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:09,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:09,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
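The setup/teardown cycle repeated before and after each test method removes and re-adds a "master" rsgroup; every change is persisted to the /hbase/rsgroup/<name> znodes, with the "Writing ZK GroupInfo count" lines toggling between 3 and 4 as the group disappears and reappears. A minimal sketch of that lifecycle, assuming the addRSGroup/removeRSGroup signatures on the same client as above.

import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class GroupLifecycleSketch {
  static void cycle(RSGroupAdminClient rsGroupAdmin) throws Exception {
    // Adding a group updates /hbase/rsgroup/master and raises the stored
    // group count (3 -> 4 in the log above).
    rsGroupAdmin.addRSGroup("master");
    // ... test body runs against the extra group ...
    // Removing it rewrites the remaining groups and drops the count back (4 -> 3).
    rsGroupAdmin.removeRSGroup("master");
  }
}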
2023-07-19 05:15:09,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:09,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:09,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:09,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:09,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:09,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:09,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:09,614 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:09,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:09,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:09,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:09,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:09,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:09,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:09,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:09,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:15:09,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:09,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 863 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744909623, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:15:09,623 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:15:09,625 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:09,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:09,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:09,625 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:09,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:09,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:09,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:09,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:09,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_844815131 2023-07-19 05:15:09,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:09,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:09,630 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_844815131 2023-07-19 05:15:09,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:09,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:09,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:09,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:09,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979] to rsgroup Group_testDisabledTableMove_844815131 2023-07-19 05:15:09,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:09,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_844815131 2023-07-19 05:15:09,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:09,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:09,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 05:15:09,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41899,1689743683228, jenkins-hbase4.apache.org,41979,1689743683435] are moved back to default 2023-07-19 05:15:09,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_844815131 2023-07-19 05:15:09,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:09,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:09,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:09,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_844815131 2023-07-19 05:15:09,652 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:09,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:09,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-19 05:15:09,657 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:15:09,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 135 2023-07-19 05:15:09,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-19 05:15:09,658 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:09,659 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_844815131 2023-07-19 05:15:09,659 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:09,659 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:09,660 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-19 05:15:09,661 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:15:09,665 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/f509fde207e38f84ac82ec4942287b0e 2023-07-19 05:15:09,665 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/b16bf5879a55caee1b87f53fcc6d23a8 2023-07-19 05:15:09,665 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/1e7d8c962a73a0bcd69a15738a42741b 2023-07-19 05:15:09,665 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/917f8b9bc38b5f5778b174536a4cbe59 2023-07-19 05:15:09,665 DEBUG [HFileArchiver-6] 
backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/5b6394742e577a895614c9ea74a2a6dd 2023-07-19 05:15:09,666 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/f509fde207e38f84ac82ec4942287b0e empty. 2023-07-19 05:15:09,666 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/1e7d8c962a73a0bcd69a15738a42741b empty. 2023-07-19 05:15:09,666 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/917f8b9bc38b5f5778b174536a4cbe59 empty. 2023-07-19 05:15:09,666 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/b16bf5879a55caee1b87f53fcc6d23a8 empty. 2023-07-19 05:15:09,666 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/5b6394742e577a895614c9ea74a2a6dd empty. 2023-07-19 05:15:09,666 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/f509fde207e38f84ac82ec4942287b0e 2023-07-19 05:15:09,666 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/1e7d8c962a73a0bcd69a15738a42741b 2023-07-19 05:15:09,666 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/b16bf5879a55caee1b87f53fcc6d23a8 2023-07-19 05:15:09,666 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/5b6394742e577a895614c9ea74a2a6dd 2023-07-19 05:15:09,666 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/917f8b9bc38b5f5778b174536a4cbe59 2023-07-19 05:15:09,667 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-19 05:15:09,680 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-19 05:15:09,681 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 1e7d8c962a73a0bcd69a15738a42741b, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES 
=> {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:15:09,681 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 5b6394742e577a895614c9ea74a2a6dd, NAME => 'Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:15:09,682 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => f509fde207e38f84ac82ec4942287b0e, NAME => 'Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:15:09,701 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:09,701 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:09,701 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 5b6394742e577a895614c9ea74a2a6dd, disabling compactions & flushes 2023-07-19 05:15:09,701 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 1e7d8c962a73a0bcd69a15738a42741b, disabling compactions & flushes 2023-07-19 05:15:09,701 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd. 2023-07-19 05:15:09,701 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b. 
2023-07-19 05:15:09,701 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd. 2023-07-19 05:15:09,702 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b. 2023-07-19 05:15:09,702 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd. after waiting 0 ms 2023-07-19 05:15:09,702 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd. 2023-07-19 05:15:09,702 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b. after waiting 0 ms 2023-07-19 05:15:09,702 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd. 2023-07-19 05:15:09,702 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b. 2023-07-19 05:15:09,702 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 5b6394742e577a895614c9ea74a2a6dd: 2023-07-19 05:15:09,702 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b. 
2023-07-19 05:15:09,702 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 1e7d8c962a73a0bcd69a15738a42741b: 2023-07-19 05:15:09,703 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 917f8b9bc38b5f5778b174536a4cbe59, NAME => 'Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:15:09,703 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => b16bf5879a55caee1b87f53fcc6d23a8, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp 2023-07-19 05:15:09,713 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:09,713 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 917f8b9bc38b5f5778b174536a4cbe59, disabling compactions & flushes 2023-07-19 05:15:09,713 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59. 2023-07-19 05:15:09,713 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59. 2023-07-19 05:15:09,713 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59. after waiting 0 ms 2023-07-19 05:15:09,713 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59. 2023-07-19 05:15:09,713 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59. 
2023-07-19 05:15:09,713 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 917f8b9bc38b5f5778b174536a4cbe59: 2023-07-19 05:15:09,713 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:09,713 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing b16bf5879a55caee1b87f53fcc6d23a8, disabling compactions & flushes 2023-07-19 05:15:09,713 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8. 2023-07-19 05:15:09,713 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8. 2023-07-19 05:15:09,713 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8. after waiting 0 ms 2023-07-19 05:15:09,713 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8. 2023-07-19 05:15:09,713 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8. 2023-07-19 05:15:09,713 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for b16bf5879a55caee1b87f53fcc6d23a8: 2023-07-19 05:15:09,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-19 05:15:09,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-19 05:15:10,099 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:10,099 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing f509fde207e38f84ac82ec4942287b0e, disabling compactions & flushes 2023-07-19 05:15:10,099 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e. 2023-07-19 05:15:10,099 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e. 2023-07-19 05:15:10,099 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e. 
after waiting 0 ms 2023-07-19 05:15:10,099 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e. 2023-07-19 05:15:10,099 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e. 2023-07-19 05:15:10,099 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for f509fde207e38f84ac82ec4942287b0e: 2023-07-19 05:15:10,101 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:15:10,102 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689743710102"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743710102"}]},"ts":"1689743710102"} 2023-07-19 05:15:10,102 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689743710102"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743710102"}]},"ts":"1689743710102"} 2023-07-19 05:15:10,102 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689743710102"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743710102"}]},"ts":"1689743710102"} 2023-07-19 05:15:10,102 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689743710102"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743710102"}]},"ts":"1689743710102"} 2023-07-19 05:15:10,103 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689743710102"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743710102"}]},"ts":"1689743710102"} 2023-07-19 05:15:10,104 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-19 05:15:10,105 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:15:10,105 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743710105"}]},"ts":"1689743710105"} 2023-07-19 05:15:10,106 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-19 05:15:10,111 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:15:10,111 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:15:10,111 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:15:10,111 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:15:10,111 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f509fde207e38f84ac82ec4942287b0e, ASSIGN}, {pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5b6394742e577a895614c9ea74a2a6dd, ASSIGN}, {pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e7d8c962a73a0bcd69a15738a42741b, ASSIGN}, {pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b16bf5879a55caee1b87f53fcc6d23a8, ASSIGN}, {pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=917f8b9bc38b5f5778b174536a4cbe59, ASSIGN}] 2023-07-19 05:15:10,113 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e7d8c962a73a0bcd69a15738a42741b, ASSIGN 2023-07-19 05:15:10,113 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f509fde207e38f84ac82ec4942287b0e, ASSIGN 2023-07-19 05:15:10,113 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5b6394742e577a895614c9ea74a2a6dd, ASSIGN 2023-07-19 05:15:10,113 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=917f8b9bc38b5f5778b174536a4cbe59, ASSIGN 2023-07-19 05:15:10,114 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b16bf5879a55caee1b87f53fcc6d23a8, ASSIGN 2023-07-19 05:15:10,114 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e7d8c962a73a0bcd69a15738a42741b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45681,1689743683028; forceNewPlan=false, retain=false 2023-07-19 05:15:10,114 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5b6394742e577a895614c9ea74a2a6dd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43237,1689743687175; forceNewPlan=false, retain=false 2023-07-19 05:15:10,114 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=917f8b9bc38b5f5778b174536a4cbe59, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43237,1689743687175; forceNewPlan=false, retain=false 2023-07-19 05:15:10,114 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f509fde207e38f84ac82ec4942287b0e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43237,1689743687175; forceNewPlan=false, retain=false 2023-07-19 05:15:10,115 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b16bf5879a55caee1b87f53fcc6d23a8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45681,1689743683028; forceNewPlan=false, retain=false 2023-07-19 05:15:10,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-19 05:15:10,264 INFO [jenkins-hbase4:35853] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-19 05:15:10,267 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=5b6394742e577a895614c9ea74a2a6dd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:10,267 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=f509fde207e38f84ac82ec4942287b0e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:10,268 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689743710267"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743710267"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743710267"}]},"ts":"1689743710267"} 2023-07-19 05:15:10,267 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=b16bf5879a55caee1b87f53fcc6d23a8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:10,268 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689743710267"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743710267"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743710267"}]},"ts":"1689743710267"} 2023-07-19 05:15:10,268 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689743710267"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743710267"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743710267"}]},"ts":"1689743710267"} 2023-07-19 05:15:10,267 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=1e7d8c962a73a0bcd69a15738a42741b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:10,267 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=917f8b9bc38b5f5778b174536a4cbe59, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:10,268 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689743710267"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743710267"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743710267"}]},"ts":"1689743710267"} 2023-07-19 05:15:10,268 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689743710267"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743710267"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743710267"}]},"ts":"1689743710267"} 2023-07-19 05:15:10,269 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=137, state=RUNNABLE; OpenRegionProcedure 5b6394742e577a895614c9ea74a2a6dd, 
server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:15:10,270 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=136, state=RUNNABLE; OpenRegionProcedure f509fde207e38f84ac82ec4942287b0e, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:15:10,270 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=139, state=RUNNABLE; OpenRegionProcedure b16bf5879a55caee1b87f53fcc6d23a8, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:15:10,271 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=138, state=RUNNABLE; OpenRegionProcedure 1e7d8c962a73a0bcd69a15738a42741b, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:15:10,274 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=140, state=RUNNABLE; OpenRegionProcedure 917f8b9bc38b5f5778b174536a4cbe59, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:15:10,425 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59. 2023-07-19 05:15:10,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 917f8b9bc38b5f5778b174536a4cbe59, NAME => 'Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-19 05:15:10,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 917f8b9bc38b5f5778b174536a4cbe59 2023-07-19 05:15:10,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:10,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 917f8b9bc38b5f5778b174536a4cbe59 2023-07-19 05:15:10,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 917f8b9bc38b5f5778b174536a4cbe59 2023-07-19 05:15:10,427 INFO [StoreOpener-917f8b9bc38b5f5778b174536a4cbe59-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 917f8b9bc38b5f5778b174536a4cbe59 2023-07-19 05:15:10,428 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b. 
2023-07-19 05:15:10,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1e7d8c962a73a0bcd69a15738a42741b, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-19 05:15:10,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 1e7d8c962a73a0bcd69a15738a42741b 2023-07-19 05:15:10,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:10,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1e7d8c962a73a0bcd69a15738a42741b 2023-07-19 05:15:10,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1e7d8c962a73a0bcd69a15738a42741b 2023-07-19 05:15:10,429 DEBUG [StoreOpener-917f8b9bc38b5f5778b174536a4cbe59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/917f8b9bc38b5f5778b174536a4cbe59/f 2023-07-19 05:15:10,429 DEBUG [StoreOpener-917f8b9bc38b5f5778b174536a4cbe59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/917f8b9bc38b5f5778b174536a4cbe59/f 2023-07-19 05:15:10,429 INFO [StoreOpener-917f8b9bc38b5f5778b174536a4cbe59-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 917f8b9bc38b5f5778b174536a4cbe59 columnFamilyName f 2023-07-19 05:15:10,430 INFO [StoreOpener-1e7d8c962a73a0bcd69a15738a42741b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1e7d8c962a73a0bcd69a15738a42741b 2023-07-19 05:15:10,430 INFO [StoreOpener-917f8b9bc38b5f5778b174536a4cbe59-1] regionserver.HStore(310): Store=917f8b9bc38b5f5778b174536a4cbe59/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:10,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/917f8b9bc38b5f5778b174536a4cbe59 
2023-07-19 05:15:10,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/917f8b9bc38b5f5778b174536a4cbe59 2023-07-19 05:15:10,431 DEBUG [StoreOpener-1e7d8c962a73a0bcd69a15738a42741b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/1e7d8c962a73a0bcd69a15738a42741b/f 2023-07-19 05:15:10,431 DEBUG [StoreOpener-1e7d8c962a73a0bcd69a15738a42741b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/1e7d8c962a73a0bcd69a15738a42741b/f 2023-07-19 05:15:10,431 INFO [StoreOpener-1e7d8c962a73a0bcd69a15738a42741b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1e7d8c962a73a0bcd69a15738a42741b columnFamilyName f 2023-07-19 05:15:10,432 INFO [StoreOpener-1e7d8c962a73a0bcd69a15738a42741b-1] regionserver.HStore(310): Store=1e7d8c962a73a0bcd69a15738a42741b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:10,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/1e7d8c962a73a0bcd69a15738a42741b 2023-07-19 05:15:10,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/1e7d8c962a73a0bcd69a15738a42741b 2023-07-19 05:15:10,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 917f8b9bc38b5f5778b174536a4cbe59 2023-07-19 05:15:10,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1e7d8c962a73a0bcd69a15738a42741b 2023-07-19 05:15:10,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/917f8b9bc38b5f5778b174536a4cbe59/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:10,437 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 917f8b9bc38b5f5778b174536a4cbe59; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10202665280, 
jitterRate=-0.0498027503490448}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:10,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 917f8b9bc38b5f5778b174536a4cbe59: 2023-07-19 05:15:10,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/1e7d8c962a73a0bcd69a15738a42741b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:10,439 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59., pid=145, masterSystemTime=1689743710421 2023-07-19 05:15:10,439 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1e7d8c962a73a0bcd69a15738a42741b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10901501600, jitterRate=0.015281453728675842}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:10,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1e7d8c962a73a0bcd69a15738a42741b: 2023-07-19 05:15:10,440 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b., pid=144, masterSystemTime=1689743710425 2023-07-19 05:15:10,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59. 2023-07-19 05:15:10,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59. 2023-07-19 05:15:10,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e. 
2023-07-19 05:15:10,442 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=917f8b9bc38b5f5778b174536a4cbe59, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:10,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f509fde207e38f84ac82ec4942287b0e, NAME => 'Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-19 05:15:10,442 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689743710441"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743710441"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743710441"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743710441"}]},"ts":"1689743710441"} 2023-07-19 05:15:10,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove f509fde207e38f84ac82ec4942287b0e 2023-07-19 05:15:10,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:10,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f509fde207e38f84ac82ec4942287b0e 2023-07-19 05:15:10,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f509fde207e38f84ac82ec4942287b0e 2023-07-19 05:15:10,443 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b. 2023-07-19 05:15:10,443 INFO [StoreOpener-f509fde207e38f84ac82ec4942287b0e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f509fde207e38f84ac82ec4942287b0e 2023-07-19 05:15:10,443 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b. 2023-07-19 05:15:10,444 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8. 
2023-07-19 05:15:10,444 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=1e7d8c962a73a0bcd69a15738a42741b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:10,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b16bf5879a55caee1b87f53fcc6d23a8, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-19 05:15:10,444 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689743710444"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743710444"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743710444"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743710444"}]},"ts":"1689743710444"} 2023-07-19 05:15:10,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove b16bf5879a55caee1b87f53fcc6d23a8 2023-07-19 05:15:10,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:10,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b16bf5879a55caee1b87f53fcc6d23a8 2023-07-19 05:15:10,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b16bf5879a55caee1b87f53fcc6d23a8 2023-07-19 05:15:10,446 DEBUG [StoreOpener-f509fde207e38f84ac82ec4942287b0e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/f509fde207e38f84ac82ec4942287b0e/f 2023-07-19 05:15:10,446 DEBUG [StoreOpener-f509fde207e38f84ac82ec4942287b0e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/f509fde207e38f84ac82ec4942287b0e/f 2023-07-19 05:15:10,446 INFO [StoreOpener-f509fde207e38f84ac82ec4942287b0e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f509fde207e38f84ac82ec4942287b0e columnFamilyName f 2023-07-19 05:15:10,447 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=140 2023-07-19 05:15:10,447 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=140, state=SUCCESS; OpenRegionProcedure 917f8b9bc38b5f5778b174536a4cbe59, server=jenkins-hbase4.apache.org,43237,1689743687175 in 171 msec 2023-07-19 05:15:10,447 INFO [StoreOpener-f509fde207e38f84ac82ec4942287b0e-1] regionserver.HStore(310): Store=f509fde207e38f84ac82ec4942287b0e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:10,448 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=138 2023-07-19 05:15:10,448 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=138, state=SUCCESS; OpenRegionProcedure 1e7d8c962a73a0bcd69a15738a42741b, server=jenkins-hbase4.apache.org,45681,1689743683028 in 175 msec 2023-07-19 05:15:10,448 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=917f8b9bc38b5f5778b174536a4cbe59, ASSIGN in 336 msec 2023-07-19 05:15:10,449 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e7d8c962a73a0bcd69a15738a42741b, ASSIGN in 337 msec 2023-07-19 05:15:10,450 INFO [StoreOpener-b16bf5879a55caee1b87f53fcc6d23a8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b16bf5879a55caee1b87f53fcc6d23a8 2023-07-19 05:15:10,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/f509fde207e38f84ac82ec4942287b0e 2023-07-19 05:15:10,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/f509fde207e38f84ac82ec4942287b0e 2023-07-19 05:15:10,452 DEBUG [StoreOpener-b16bf5879a55caee1b87f53fcc6d23a8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/b16bf5879a55caee1b87f53fcc6d23a8/f 2023-07-19 05:15:10,452 DEBUG [StoreOpener-b16bf5879a55caee1b87f53fcc6d23a8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/b16bf5879a55caee1b87f53fcc6d23a8/f 2023-07-19 05:15:10,452 INFO [StoreOpener-b16bf5879a55caee1b87f53fcc6d23a8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b16bf5879a55caee1b87f53fcc6d23a8 columnFamilyName f 2023-07-19 05:15:10,453 INFO [StoreOpener-b16bf5879a55caee1b87f53fcc6d23a8-1] regionserver.HStore(310): Store=b16bf5879a55caee1b87f53fcc6d23a8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:10,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/b16bf5879a55caee1b87f53fcc6d23a8 2023-07-19 05:15:10,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/b16bf5879a55caee1b87f53fcc6d23a8 2023-07-19 05:15:10,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f509fde207e38f84ac82ec4942287b0e 2023-07-19 05:15:10,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/f509fde207e38f84ac82ec4942287b0e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:10,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b16bf5879a55caee1b87f53fcc6d23a8 2023-07-19 05:15:10,457 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f509fde207e38f84ac82ec4942287b0e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9739088800, jitterRate=-0.09297667443752289}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:10,457 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f509fde207e38f84ac82ec4942287b0e: 2023-07-19 05:15:10,457 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e., pid=142, masterSystemTime=1689743710421 2023-07-19 05:15:10,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e. 2023-07-19 05:15:10,459 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e. 2023-07-19 05:15:10,459 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd. 
2023-07-19 05:15:10,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/b16bf5879a55caee1b87f53fcc6d23a8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:10,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5b6394742e577a895614c9ea74a2a6dd, NAME => 'Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-19 05:15:10,459 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=f509fde207e38f84ac82ec4942287b0e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:10,459 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689743710459"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743710459"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743710459"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743710459"}]},"ts":"1689743710459"} 2023-07-19 05:15:10,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 5b6394742e577a895614c9ea74a2a6dd 2023-07-19 05:15:10,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:10,459 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b16bf5879a55caee1b87f53fcc6d23a8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10277417120, jitterRate=-0.04284094274044037}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:10,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5b6394742e577a895614c9ea74a2a6dd 2023-07-19 05:15:10,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b16bf5879a55caee1b87f53fcc6d23a8: 2023-07-19 05:15:10,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5b6394742e577a895614c9ea74a2a6dd 2023-07-19 05:15:10,460 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8., pid=143, masterSystemTime=1689743710425 2023-07-19 05:15:10,461 INFO [StoreOpener-5b6394742e577a895614c9ea74a2a6dd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5b6394742e577a895614c9ea74a2a6dd 2023-07-19 05:15:10,462 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8. 2023-07-19 05:15:10,462 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8. 2023-07-19 05:15:10,462 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=136 2023-07-19 05:15:10,462 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=b16bf5879a55caee1b87f53fcc6d23a8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:10,462 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=136, state=SUCCESS; OpenRegionProcedure f509fde207e38f84ac82ec4942287b0e, server=jenkins-hbase4.apache.org,43237,1689743687175 in 190 msec 2023-07-19 05:15:10,462 DEBUG [StoreOpener-5b6394742e577a895614c9ea74a2a6dd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/5b6394742e577a895614c9ea74a2a6dd/f 2023-07-19 05:15:10,463 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689743710462"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743710462"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743710462"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743710462"}]},"ts":"1689743710462"} 2023-07-19 05:15:10,463 DEBUG [StoreOpener-5b6394742e577a895614c9ea74a2a6dd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/5b6394742e577a895614c9ea74a2a6dd/f 2023-07-19 05:15:10,463 INFO [StoreOpener-5b6394742e577a895614c9ea74a2a6dd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5b6394742e577a895614c9ea74a2a6dd columnFamilyName f 2023-07-19 05:15:10,464 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f509fde207e38f84ac82ec4942287b0e, ASSIGN in 351 msec 2023-07-19 05:15:10,465 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=139 2023-07-19 05:15:10,466 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=139, state=SUCCESS; OpenRegionProcedure b16bf5879a55caee1b87f53fcc6d23a8, 
server=jenkins-hbase4.apache.org,45681,1689743683028 in 194 msec 2023-07-19 05:15:10,467 INFO [StoreOpener-5b6394742e577a895614c9ea74a2a6dd-1] regionserver.HStore(310): Store=5b6394742e577a895614c9ea74a2a6dd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:10,467 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b16bf5879a55caee1b87f53fcc6d23a8, ASSIGN in 355 msec 2023-07-19 05:15:10,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/5b6394742e577a895614c9ea74a2a6dd 2023-07-19 05:15:10,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/5b6394742e577a895614c9ea74a2a6dd 2023-07-19 05:15:10,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5b6394742e577a895614c9ea74a2a6dd 2023-07-19 05:15:10,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/5b6394742e577a895614c9ea74a2a6dd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:10,473 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5b6394742e577a895614c9ea74a2a6dd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11220781760, jitterRate=0.045016735792160034}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:10,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5b6394742e577a895614c9ea74a2a6dd: 2023-07-19 05:15:10,473 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd., pid=141, masterSystemTime=1689743710421 2023-07-19 05:15:10,475 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd. 2023-07-19 05:15:10,475 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd. 
2023-07-19 05:15:10,478 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=5b6394742e577a895614c9ea74a2a6dd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:10,479 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689743710478"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743710478"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743710478"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743710478"}]},"ts":"1689743710478"} 2023-07-19 05:15:10,482 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=137 2023-07-19 05:15:10,482 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=137, state=SUCCESS; OpenRegionProcedure 5b6394742e577a895614c9ea74a2a6dd, server=jenkins-hbase4.apache.org,43237,1689743687175 in 211 msec 2023-07-19 05:15:10,483 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=135 2023-07-19 05:15:10,483 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5b6394742e577a895614c9ea74a2a6dd, ASSIGN in 371 msec 2023-07-19 05:15:10,484 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:15:10,484 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743710484"}]},"ts":"1689743710484"} 2023-07-19 05:15:10,485 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-19 05:15:10,487 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:15:10,489 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=135, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 833 msec 2023-07-19 05:15:10,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-19 05:15:10,761 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 135 completed 2023-07-19 05:15:10,761 DEBUG [Listener at localhost/38799] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-19 05:15:10,762 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:10,766 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
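[editor note] The entries above show CreateTableProcedure pid=135 finishing for the pre-split table Group_testDisabledTableMove, the test client waiting until all regions are assigned, and then (below) the disable being requested. A minimal, hedged sketch of the client-side calls that would produce this sequence, assuming an already-started HBaseTestingUtility mini cluster as in this log; the class and method names are illustrative, while the table name, column family 'f', and split boundaries are taken from the log entries themselves:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateAndDisableSketch {
  // util is assumed to be an HBaseTestingUtility whose mini cluster is already running,
  // as in the setup that produced this log.
  static void createAndDisable(HBaseTestingUtility util) throws Exception {
    TableName tableName = TableName.valueOf("Group_testDisabledTableMove");
    // Split keys taken from the region boundaries logged above:
    // 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'.
    byte[][] splitKeys = new byte[][] {
        Bytes.toBytes("aaaaa"),
        new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
        new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
        Bytes.toBytes("zzzzz")
    };
    try (Admin admin = util.getConnection().getAdmin()) {
      // Column family 'f', as shown by the StoreOpener entries.
      admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build(), splitKeys);
      util.waitUntilAllRegionsAssigned(tableName);  // "Waiting until all regions ... get assigned"
      admin.disableTable(tableName);                // DisableTableProcedure (pid=146 below)
    }
  }
}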
2023-07-19 05:15:10,766 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:10,766 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-19 05:15:10,767 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:10,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-19 05:15:10,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:15:10,775 INFO [Listener at localhost/38799] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-19 05:15:10,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-19 05:15:10,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=146, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-19 05:15:10,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-19 05:15:10,779 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743710779"}]},"ts":"1689743710779"} 2023-07-19 05:15:10,780 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-19 05:15:10,782 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-19 05:15:10,782 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f509fde207e38f84ac82ec4942287b0e, UNASSIGN}, {pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5b6394742e577a895614c9ea74a2a6dd, UNASSIGN}, {pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e7d8c962a73a0bcd69a15738a42741b, UNASSIGN}, {pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b16bf5879a55caee1b87f53fcc6d23a8, UNASSIGN}, {pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=917f8b9bc38b5f5778b174536a4cbe59, UNASSIGN}] 2023-07-19 05:15:10,784 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f509fde207e38f84ac82ec4942287b0e, UNASSIGN 2023-07-19 05:15:10,784 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=146, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5b6394742e577a895614c9ea74a2a6dd, UNASSIGN 2023-07-19 05:15:10,784 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e7d8c962a73a0bcd69a15738a42741b, UNASSIGN 2023-07-19 05:15:10,785 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b16bf5879a55caee1b87f53fcc6d23a8, UNASSIGN 2023-07-19 05:15:10,785 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=917f8b9bc38b5f5778b174536a4cbe59, UNASSIGN 2023-07-19 05:15:10,785 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=f509fde207e38f84ac82ec4942287b0e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:10,785 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689743710785"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743710785"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743710785"}]},"ts":"1689743710785"} 2023-07-19 05:15:10,785 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=5b6394742e577a895614c9ea74a2a6dd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:10,785 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=1e7d8c962a73a0bcd69a15738a42741b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:10,785 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689743710785"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743710785"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743710785"}]},"ts":"1689743710785"} 2023-07-19 05:15:10,785 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689743710785"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743710785"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743710785"}]},"ts":"1689743710785"} 2023-07-19 05:15:10,785 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=b16bf5879a55caee1b87f53fcc6d23a8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:10,786 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689743710785"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743710785"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743710785"}]},"ts":"1689743710785"} 2023-07-19 05:15:10,786 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=917f8b9bc38b5f5778b174536a4cbe59, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:10,786 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689743710786"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743710786"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743710786"}]},"ts":"1689743710786"} 2023-07-19 05:15:10,787 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=147, state=RUNNABLE; CloseRegionProcedure f509fde207e38f84ac82ec4942287b0e, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:15:10,787 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=149, state=RUNNABLE; CloseRegionProcedure 1e7d8c962a73a0bcd69a15738a42741b, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:15:10,788 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=154, ppid=148, state=RUNNABLE; CloseRegionProcedure 5b6394742e577a895614c9ea74a2a6dd, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:15:10,789 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=155, ppid=150, state=RUNNABLE; CloseRegionProcedure b16bf5879a55caee1b87f53fcc6d23a8, server=jenkins-hbase4.apache.org,45681,1689743683028}] 2023-07-19 05:15:10,790 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=156, ppid=151, state=RUNNABLE; CloseRegionProcedure 917f8b9bc38b5f5778b174536a4cbe59, server=jenkins-hbase4.apache.org,43237,1689743687175}] 2023-07-19 05:15:10,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-19 05:15:10,940 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5b6394742e577a895614c9ea74a2a6dd 2023-07-19 05:15:10,940 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1e7d8c962a73a0bcd69a15738a42741b 2023-07-19 05:15:10,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5b6394742e577a895614c9ea74a2a6dd, disabling compactions & flushes 2023-07-19 05:15:10,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1e7d8c962a73a0bcd69a15738a42741b, disabling compactions & flushes 2023-07-19 05:15:10,942 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd. 
2023-07-19 05:15:10,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd. 2023-07-19 05:15:10,942 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b. 2023-07-19 05:15:10,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd. after waiting 0 ms 2023-07-19 05:15:10,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b. 2023-07-19 05:15:10,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd. 2023-07-19 05:15:10,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b. after waiting 0 ms 2023-07-19 05:15:10,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b. 2023-07-19 05:15:10,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/5b6394742e577a895614c9ea74a2a6dd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:15:10,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/1e7d8c962a73a0bcd69a15738a42741b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:15:10,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd. 2023-07-19 05:15:10,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b. 
2023-07-19 05:15:10,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5b6394742e577a895614c9ea74a2a6dd: 2023-07-19 05:15:10,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1e7d8c962a73a0bcd69a15738a42741b: 2023-07-19 05:15:10,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1e7d8c962a73a0bcd69a15738a42741b 2023-07-19 05:15:10,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b16bf5879a55caee1b87f53fcc6d23a8 2023-07-19 05:15:10,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b16bf5879a55caee1b87f53fcc6d23a8, disabling compactions & flushes 2023-07-19 05:15:10,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8. 2023-07-19 05:15:10,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8. 2023-07-19 05:15:10,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8. after waiting 0 ms 2023-07-19 05:15:10,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8. 2023-07-19 05:15:10,949 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=1e7d8c962a73a0bcd69a15738a42741b, regionState=CLOSED 2023-07-19 05:15:10,950 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689743710949"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743710949"}]},"ts":"1689743710949"} 2023-07-19 05:15:10,950 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5b6394742e577a895614c9ea74a2a6dd 2023-07-19 05:15:10,950 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f509fde207e38f84ac82ec4942287b0e 2023-07-19 05:15:10,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f509fde207e38f84ac82ec4942287b0e, disabling compactions & flushes 2023-07-19 05:15:10,951 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e. 2023-07-19 05:15:10,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e. 2023-07-19 05:15:10,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e. 
after waiting 0 ms 2023-07-19 05:15:10,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e. 2023-07-19 05:15:10,951 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=5b6394742e577a895614c9ea74a2a6dd, regionState=CLOSED 2023-07-19 05:15:10,951 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689743710951"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743710951"}]},"ts":"1689743710951"} 2023-07-19 05:15:10,954 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/b16bf5879a55caee1b87f53fcc6d23a8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:15:10,955 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=149 2023-07-19 05:15:10,955 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=154, resume processing ppid=148 2023-07-19 05:15:10,955 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=149, state=SUCCESS; CloseRegionProcedure 1e7d8c962a73a0bcd69a15738a42741b, server=jenkins-hbase4.apache.org,45681,1689743683028 in 165 msec 2023-07-19 05:15:10,955 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=154, ppid=148, state=SUCCESS; CloseRegionProcedure 5b6394742e577a895614c9ea74a2a6dd, server=jenkins-hbase4.apache.org,43237,1689743687175 in 165 msec 2023-07-19 05:15:10,955 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8. 2023-07-19 05:15:10,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/f509fde207e38f84ac82ec4942287b0e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:15:10,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b16bf5879a55caee1b87f53fcc6d23a8: 2023-07-19 05:15:10,955 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e. 
2023-07-19 05:15:10,956 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f509fde207e38f84ac82ec4942287b0e: 2023-07-19 05:15:10,956 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e7d8c962a73a0bcd69a15738a42741b, UNASSIGN in 173 msec 2023-07-19 05:15:10,956 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b16bf5879a55caee1b87f53fcc6d23a8 2023-07-19 05:15:10,957 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=b16bf5879a55caee1b87f53fcc6d23a8, regionState=CLOSED 2023-07-19 05:15:10,957 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5b6394742e577a895614c9ea74a2a6dd, UNASSIGN in 173 msec 2023-07-19 05:15:10,957 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689743710957"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743710957"}]},"ts":"1689743710957"} 2023-07-19 05:15:10,957 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f509fde207e38f84ac82ec4942287b0e 2023-07-19 05:15:10,957 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 917f8b9bc38b5f5778b174536a4cbe59 2023-07-19 05:15:10,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 917f8b9bc38b5f5778b174536a4cbe59, disabling compactions & flushes 2023-07-19 05:15:10,958 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59. 2023-07-19 05:15:10,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59. 2023-07-19 05:15:10,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59. after waiting 0 ms 2023-07-19 05:15:10,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59. 
2023-07-19 05:15:10,958 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=f509fde207e38f84ac82ec4942287b0e, regionState=CLOSED 2023-07-19 05:15:10,959 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689743710958"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743710958"}]},"ts":"1689743710958"} 2023-07-19 05:15:10,961 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=155, resume processing ppid=150 2023-07-19 05:15:10,961 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=147 2023-07-19 05:15:10,961 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=155, ppid=150, state=SUCCESS; CloseRegionProcedure b16bf5879a55caee1b87f53fcc6d23a8, server=jenkins-hbase4.apache.org,45681,1689743683028 in 170 msec 2023-07-19 05:15:10,961 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=147, state=SUCCESS; CloseRegionProcedure f509fde207e38f84ac82ec4942287b0e, server=jenkins-hbase4.apache.org,43237,1689743687175 in 173 msec 2023-07-19 05:15:10,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/Group_testDisabledTableMove/917f8b9bc38b5f5778b174536a4cbe59/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:15:10,962 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b16bf5879a55caee1b87f53fcc6d23a8, UNASSIGN in 179 msec 2023-07-19 05:15:10,962 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f509fde207e38f84ac82ec4942287b0e, UNASSIGN in 179 msec 2023-07-19 05:15:10,963 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59. 
2023-07-19 05:15:10,963 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 917f8b9bc38b5f5778b174536a4cbe59: 2023-07-19 05:15:10,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 917f8b9bc38b5f5778b174536a4cbe59 2023-07-19 05:15:10,964 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=917f8b9bc38b5f5778b174536a4cbe59, regionState=CLOSED 2023-07-19 05:15:10,964 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689743710964"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743710964"}]},"ts":"1689743710964"} 2023-07-19 05:15:10,967 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=156, resume processing ppid=151 2023-07-19 05:15:10,967 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=156, ppid=151, state=SUCCESS; CloseRegionProcedure 917f8b9bc38b5f5778b174536a4cbe59, server=jenkins-hbase4.apache.org,43237,1689743687175 in 176 msec 2023-07-19 05:15:10,968 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=146 2023-07-19 05:15:10,968 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=917f8b9bc38b5f5778b174536a4cbe59, UNASSIGN in 185 msec 2023-07-19 05:15:10,969 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743710969"}]},"ts":"1689743710969"} 2023-07-19 05:15:10,970 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-19 05:15:10,974 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-19 05:15:10,975 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=146, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 199 msec 2023-07-19 05:15:11,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-19 05:15:11,082 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 146 completed 2023-07-19 05:15:11,082 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_844815131 2023-07-19 05:15:11,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_844815131 2023-07-19 05:15:11,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:11,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_844815131 2023-07-19 05:15:11,087 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:11,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:11,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-19 05:15:11,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_844815131, current retry=0 2023-07-19 05:15:11,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_844815131. 2023-07-19 05:15:11,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:11,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:11,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:11,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-19 05:15:11,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:15:11,095 INFO [Listener at localhost/38799] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-19 05:15:11,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-19 05:15:11,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:11,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 925 service: MasterService methodName: DisableTable size: 87 connection: 172.31.14.131:46730 deadline: 1689743771096, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-19 05:15:11,097 DEBUG [Listener at localhost/38799] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-19 05:15:11,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-19 05:15:11,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] procedure2.ProcedureExecutor(1029): Stored pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 05:15:11,100 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 05:15:11,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_844815131' 2023-07-19 05:15:11,101 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=158, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 05:15:11,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:11,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_844815131 2023-07-19 05:15:11,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:11,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:11,108 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/f509fde207e38f84ac82ec4942287b0e 2023-07-19 05:15:11,109 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/b16bf5879a55caee1b87f53fcc6d23a8 2023-07-19 05:15:11,109 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/917f8b9bc38b5f5778b174536a4cbe59 2023-07-19 05:15:11,108 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/1e7d8c962a73a0bcd69a15738a42741b 2023-07-19 05:15:11,108 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/5b6394742e577a895614c9ea74a2a6dd 2023-07-19 05:15:11,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-19 05:15:11,112 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/917f8b9bc38b5f5778b174536a4cbe59/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/917f8b9bc38b5f5778b174536a4cbe59/recovered.edits] 2023-07-19 05:15:11,112 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/f509fde207e38f84ac82ec4942287b0e/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/f509fde207e38f84ac82ec4942287b0e/recovered.edits] 2023-07-19 05:15:11,113 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/b16bf5879a55caee1b87f53fcc6d23a8/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/b16bf5879a55caee1b87f53fcc6d23a8/recovered.edits] 2023-07-19 05:15:11,114 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/5b6394742e577a895614c9ea74a2a6dd/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/5b6394742e577a895614c9ea74a2a6dd/recovered.edits] 2023-07-19 05:15:11,114 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/1e7d8c962a73a0bcd69a15738a42741b/f, FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/1e7d8c962a73a0bcd69a15738a42741b/recovered.edits] 2023-07-19 05:15:11,125 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/917f8b9bc38b5f5778b174536a4cbe59/recovered.edits/4.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testDisabledTableMove/917f8b9bc38b5f5778b174536a4cbe59/recovered.edits/4.seqid 2023-07-19 05:15:11,126 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/f509fde207e38f84ac82ec4942287b0e/recovered.edits/4.seqid to 
hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testDisabledTableMove/f509fde207e38f84ac82ec4942287b0e/recovered.edits/4.seqid 2023-07-19 05:15:11,127 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/5b6394742e577a895614c9ea74a2a6dd/recovered.edits/4.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testDisabledTableMove/5b6394742e577a895614c9ea74a2a6dd/recovered.edits/4.seqid 2023-07-19 05:15:11,127 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/917f8b9bc38b5f5778b174536a4cbe59 2023-07-19 05:15:11,127 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/f509fde207e38f84ac82ec4942287b0e 2023-07-19 05:15:11,127 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/b16bf5879a55caee1b87f53fcc6d23a8/recovered.edits/4.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testDisabledTableMove/b16bf5879a55caee1b87f53fcc6d23a8/recovered.edits/4.seqid 2023-07-19 05:15:11,128 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/5b6394742e577a895614c9ea74a2a6dd 2023-07-19 05:15:11,128 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/b16bf5879a55caee1b87f53fcc6d23a8 2023-07-19 05:15:11,129 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/1e7d8c962a73a0bcd69a15738a42741b/recovered.edits/4.seqid to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/archive/data/default/Group_testDisabledTableMove/1e7d8c962a73a0bcd69a15738a42741b/recovered.edits/4.seqid 2023-07-19 05:15:11,129 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/.tmp/data/default/Group_testDisabledTableMove/1e7d8c962a73a0bcd69a15738a42741b 2023-07-19 05:15:11,129 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-19 05:15:11,132 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=158, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 05:15:11,134 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-19 05:15:11,139 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 
2023-07-19 05:15:11,140 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=158, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 05:15:11,140 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-19 05:15:11,141 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743711140"}]},"ts":"9223372036854775807"} 2023-07-19 05:15:11,141 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743711140"}]},"ts":"9223372036854775807"} 2023-07-19 05:15:11,141 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743711140"}]},"ts":"9223372036854775807"} 2023-07-19 05:15:11,141 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743711140"}]},"ts":"9223372036854775807"} 2023-07-19 05:15:11,141 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743711140"}]},"ts":"9223372036854775807"} 2023-07-19 05:15:11,143 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-19 05:15:11,143 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f509fde207e38f84ac82ec4942287b0e, NAME => 'Group_testDisabledTableMove,,1689743709654.f509fde207e38f84ac82ec4942287b0e.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 5b6394742e577a895614c9ea74a2a6dd, NAME => 'Group_testDisabledTableMove,aaaaa,1689743709654.5b6394742e577a895614c9ea74a2a6dd.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 1e7d8c962a73a0bcd69a15738a42741b, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689743709654.1e7d8c962a73a0bcd69a15738a42741b.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => b16bf5879a55caee1b87f53fcc6d23a8, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689743709654.b16bf5879a55caee1b87f53fcc6d23a8.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 917f8b9bc38b5f5778b174536a4cbe59, NAME => 'Group_testDisabledTableMove,zzzzz,1689743709654.917f8b9bc38b5f5778b174536a4cbe59.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-19 05:15:11,143 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
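[editor's note] The DeleteTableProcedure above archives the region directories, deletes the region rows from hbase:meta, and drops the table descriptor. A hedged sketch of the client side of that step (only the table name is taken from this run):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DeleteTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("Group_testDisabledTableMove");
      // deleteTable requires the table to be disabled first; the test utility
      // skips the disable when the table is already DISABLED, which is why the
      // log above shows a swallowed TableNotEnabledException before the delete.
      if (!admin.isTableDisabled(table)) {
        admin.disableTable(table);
      }
      admin.deleteTable(table);
    }
  }
}
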
2023-07-19 05:15:11,143 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689743711143"}]},"ts":"9223372036854775807"} 2023-07-19 05:15:11,144 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-19 05:15:11,146 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=158, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 05:15:11,147 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=158, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 48 msec 2023-07-19 05:15:11,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-19 05:15:11,211 INFO [Listener at localhost/38799] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 158 completed 2023-07-19 05:15:11,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:11,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:11,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:11,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 05:15:11,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:11,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979] to rsgroup default 2023-07-19 05:15:11,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:11,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_844815131 2023-07-19 05:15:11,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:11,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:11,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_844815131, current retry=0 2023-07-19 05:15:11,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41899,1689743683228, jenkins-hbase4.apache.org,41979,1689743683435] are moved back to Group_testDisabledTableMove_844815131 2023-07-19 05:15:11,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_844815131 => default 2023-07-19 05:15:11,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:11,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_844815131 2023-07-19 05:15:11,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:11,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:11,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 05:15:11,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:11,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:11,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
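[editor's note] The teardown entries above move the group's servers back to the default group and then remove the now-empty group. A sketch using the rsgroup coprocessor client that these tests exercise (RSGroupAdminClient and its moveServers method appear in the stack traces above); the constructor and the Address helper used here are assumptions about that API, not confirmed by this log:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupTeardownSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      // Assumed constructor: the rsgroup test client is built from a Connection.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41899));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41979));
      // Move the group's servers back to the default group, then drop the
      // empty group, mirroring the MoveServers/RemoveRSGroup requests above.
      rsGroupAdmin.moveServers(servers, "default");
      rsGroupAdmin.removeRSGroup("Group_testDisabledTableMove_844815131");
    }
  }
}
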
2023-07-19 05:15:11,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:11,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:11,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:11,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:11,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:11,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:11,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:11,240 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:11,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:11,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:11,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:11,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:11,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:11,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:11,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:11,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:15:11,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:11,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 959 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744911249, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:15:11,249 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:15:11,251 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:11,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:11,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:11,252 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:11,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:11,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:11,269 INFO [Listener at localhost/38799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=513 (was 511) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_965506333_17 at /127.0.0.1:48592 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6043b73e-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5b070797-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-573178852_17 at /127.0.0.1:37786 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=809 (was 780) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=387 (was 377) - SystemLoadAverage LEAK? -, ProcessCount=171 (was 173), AvailableMemoryMB=5101 (was 3064) - AvailableMemoryMB LEAK? 
- 2023-07-19 05:15:11,270 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-19 05:15:11,286 INFO [Listener at localhost/38799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=513, OpenFileDescriptor=809, MaxFileDescriptor=60000, SystemLoadAverage=387, ProcessCount=171, AvailableMemoryMB=5101 2023-07-19 05:15:11,287 WARN [Listener at localhost/38799] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-19 05:15:11,287 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-19 05:15:11,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:11,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:11,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:11,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 05:15:11,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:11,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:11,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:11,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:11,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:11,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:11,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:11,304 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:11,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:11,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:11,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-19 05:15:11,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:11,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:11,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:11,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:11,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35853] to rsgroup master 2023-07-19 05:15:11,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:11,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] ipc.CallRunner(144): callId: 987 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46730 deadline: 1689744911317, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 2023-07-19 05:15:11,318 WARN [Listener at localhost/38799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35853 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 05:15:11,320 INFO [Listener at localhost/38799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:11,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:11,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:11,321 INFO [Listener at localhost/38799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:41899, jenkins-hbase4.apache.org:41979, jenkins-hbase4.apache.org:43237, jenkins-hbase4.apache.org:45681], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:11,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:11,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35853] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:11,322 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-19 05:15:11,322 INFO [Listener at localhost/38799] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-19 05:15:11,323 DEBUG [Listener at localhost/38799] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5c97849b to 127.0.0.1:54772 2023-07-19 05:15:11,323 DEBUG [Listener at localhost/38799] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:11,324 DEBUG [Listener at localhost/38799] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-19 05:15:11,324 DEBUG [Listener at localhost/38799] util.JVMClusterUtil(257): Found active master hash=811964, stopped=false 2023-07-19 05:15:11,324 DEBUG [Listener at localhost/38799] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-19 05:15:11,324 DEBUG [Listener at localhost/38799] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-19 05:15:11,324 INFO [Listener at localhost/38799] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,35853,1689743680958 2023-07-19 05:15:11,326 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:11,326 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:11,326 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:11,326 INFO 
[Listener at localhost/38799] procedure2.ProcedureExecutor(629): Stopping 2023-07-19 05:15:11,326 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:11,326 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:11,326 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:11,327 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:11,327 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:11,327 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:11,327 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:11,328 DEBUG [Listener at localhost/38799] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x10ac7693 to 127.0.0.1:54772 2023-07-19 05:15:11,328 DEBUG [Listener at localhost/38799] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:11,328 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:11,328 INFO [Listener at localhost/38799] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45681,1689743683028' ***** 2023-07-19 05:15:11,328 INFO [Listener at localhost/38799] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 05:15:11,328 INFO [Listener at localhost/38799] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41899,1689743683228' ***** 2023-07-19 05:15:11,328 INFO [Listener at localhost/38799] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 05:15:11,328 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 05:15:11,328 INFO [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 05:15:11,328 INFO [Listener at localhost/38799] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41979,1689743683435' ***** 2023-07-19 05:15:11,329 INFO [Listener at localhost/38799] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 05:15:11,329 INFO [Listener at localhost/38799] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43237,1689743687175' ***** 2023-07-19 05:15:11,329 INFO [Listener at 
localhost/38799] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 05:15:11,330 INFO [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 05:15:11,331 INFO [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 05:15:11,346 INFO [RS:1;jenkins-hbase4:41899] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4bc6a9e2{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:11,346 INFO [RS:0;jenkins-hbase4:45681] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2068cbfe{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:11,346 INFO [RS:3;jenkins-hbase4:43237] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@289fa920{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:11,346 INFO [RS:2;jenkins-hbase4:41979] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6c3aed70{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:11,350 INFO [RS:2;jenkins-hbase4:41979] server.AbstractConnector(383): Stopped ServerConnector@6c3f6670{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:11,350 INFO [RS:0;jenkins-hbase4:45681] server.AbstractConnector(383): Stopped ServerConnector@2cb4cda5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:11,350 INFO [RS:1;jenkins-hbase4:41899] server.AbstractConnector(383): Stopped ServerConnector@6b7406ba{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:11,350 INFO [RS:3;jenkins-hbase4:43237] server.AbstractConnector(383): Stopped ServerConnector@35efd609{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:11,351 INFO [RS:1;jenkins-hbase4:41899] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 05:15:11,351 INFO [RS:0;jenkins-hbase4:45681] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 05:15:11,351 INFO [RS:2;jenkins-hbase4:41979] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 05:15:11,351 INFO [RS:3;jenkins-hbase4:43237] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 05:15:11,352 INFO [RS:1;jenkins-hbase4:41899] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@454bace{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 05:15:11,353 INFO [RS:3;jenkins-hbase4:43237] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@34f7812e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 05:15:11,353 INFO [RS:0;jenkins-hbase4:45681] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@b134c3c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 05:15:11,353 INFO [RS:2;jenkins-hbase4:41979] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5857c9af{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 05:15:11,354 INFO [RS:3;jenkins-hbase4:43237] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@14ac5f55{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/hadoop.log.dir/,STOPPED} 2023-07-19 05:15:11,355 INFO [RS:0;jenkins-hbase4:45681] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@705c29b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/hadoop.log.dir/,STOPPED} 2023-07-19 05:15:11,354 INFO [RS:1;jenkins-hbase4:41899] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@34863637{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/hadoop.log.dir/,STOPPED} 2023-07-19 05:15:11,355 INFO [RS:2;jenkins-hbase4:41979] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@709df1b3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/hadoop.log.dir/,STOPPED} 2023-07-19 05:15:11,358 INFO [RS:0;jenkins-hbase4:45681] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 05:15:11,358 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 05:15:11,359 INFO [RS:2;jenkins-hbase4:41979] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 05:15:11,359 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 05:15:11,359 INFO [RS:2;jenkins-hbase4:41979] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 05:15:11,359 INFO [RS:2;jenkins-hbase4:41979] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 05:15:11,359 INFO [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:15:11,359 DEBUG [RS:2;jenkins-hbase4:41979] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5a5ab720 to 127.0.0.1:54772 2023-07-19 05:15:11,359 DEBUG [RS:2;jenkins-hbase4:41979] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:11,359 INFO [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41979,1689743683435; all regions closed. 2023-07-19 05:15:11,359 INFO [RS:0;jenkins-hbase4:45681] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 05:15:11,361 INFO [RS:0;jenkins-hbase4:45681] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-19 05:15:11,361 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(3305): Received CLOSE for 36230d99e1f0bd83eb4e5988724a475f 2023-07-19 05:15:11,361 INFO [RS:1;jenkins-hbase4:41899] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 05:15:11,361 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(3305): Received CLOSE for c29fa493d7d1fcfaa46cb25b42ce4170 2023-07-19 05:15:11,361 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 05:15:11,361 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(3305): Received CLOSE for f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:15:11,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 36230d99e1f0bd83eb4e5988724a475f, disabling compactions & flushes 2023-07-19 05:15:11,361 INFO [RS:1;jenkins-hbase4:41899] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 05:15:11,361 INFO [RS:3;jenkins-hbase4:43237] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 05:15:11,361 INFO [RS:1;jenkins-hbase4:41899] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 05:15:11,362 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 05:15:11,361 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 2023-07-19 05:15:11,362 INFO [RS:3;jenkins-hbase4:43237] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 05:15:11,361 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:11,362 INFO [RS:3;jenkins-hbase4:43237] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 05:15:11,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 2023-07-19 05:15:11,362 INFO [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(3305): Received CLOSE for abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:11,362 INFO [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:15:11,362 INFO [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:11,362 DEBUG [RS:1;jenkins-hbase4:41899] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1ad1dd19 to 127.0.0.1:54772 2023-07-19 05:15:11,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. after waiting 0 ms 2023-07-19 05:15:11,362 DEBUG [RS:0;jenkins-hbase4:45681] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x78dbec02 to 127.0.0.1:54772 2023-07-19 05:15:11,363 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 
2023-07-19 05:15:11,363 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 36230d99e1f0bd83eb4e5988724a475f 1/1 column families, dataSize=27.06 KB heapSize=44.69 KB 2023-07-19 05:15:11,363 DEBUG [RS:1;jenkins-hbase4:41899] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:11,363 DEBUG [RS:3;jenkins-hbase4:43237] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x09433678 to 127.0.0.1:54772 2023-07-19 05:15:11,364 DEBUG [RS:3;jenkins-hbase4:43237] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:11,364 INFO [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-19 05:15:11,363 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing abc282e5f7835310e284bab60e5bb44a, disabling compactions & flushes 2023-07-19 05:15:11,364 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:11,364 DEBUG [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(1478): Online Regions={abc282e5f7835310e284bab60e5bb44a=testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a.} 2023-07-19 05:15:11,364 INFO [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41899,1689743683228; all regions closed. 2023-07-19 05:15:11,365 DEBUG [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(1504): Waiting on abc282e5f7835310e284bab60e5bb44a 2023-07-19 05:15:11,363 DEBUG [RS:0;jenkins-hbase4:45681] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:11,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:11,365 INFO [RS:0;jenkins-hbase4:45681] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 05:15:11,365 INFO [RS:0;jenkins-hbase4:45681] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 05:15:11,365 INFO [RS:0;jenkins-hbase4:45681] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 05:15:11,365 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-19 05:15:11,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. after waiting 0 ms 2023-07-19 05:15:11,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 
2023-07-19 05:15:11,372 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-19 05:15:11,372 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1478): Online Regions={36230d99e1f0bd83eb4e5988724a475f=hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f., c29fa493d7d1fcfaa46cb25b42ce4170=unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170., 1588230740=hbase:meta,,1.1588230740, f6f6fceaa7e24dc750aa525625e896fa=hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa.} 2023-07-19 05:15:11,372 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 05:15:11,372 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1504): Waiting on 1588230740, 36230d99e1f0bd83eb4e5988724a475f, c29fa493d7d1fcfaa46cb25b42ce4170, f6f6fceaa7e24dc750aa525625e896fa 2023-07-19 05:15:11,372 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 05:15:11,373 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 05:15:11,373 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 05:15:11,373 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 05:15:11,373 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=77.76 KB heapSize=122.41 KB 2023-07-19 05:15:11,378 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:11,378 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:11,379 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:11,378 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:11,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/testRename/abc282e5f7835310e284bab60e5bb44a/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-19 05:15:11,388 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 2023-07-19 05:15:11,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for abc282e5f7835310e284bab60e5bb44a: 2023-07-19 05:15:11,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689743704039.abc282e5f7835310e284bab60e5bb44a. 
2023-07-19 05:15:11,390 DEBUG [RS:1;jenkins-hbase4:41899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/oldWALs 2023-07-19 05:15:11,390 DEBUG [RS:2;jenkins-hbase4:41979] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/oldWALs 2023-07-19 05:15:11,391 INFO [RS:1;jenkins-hbase4:41899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41899%2C1689743683228:(num 1689743685642) 2023-07-19 05:15:11,391 INFO [RS:2;jenkins-hbase4:41979] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41979%2C1689743683435.meta:.meta(num 1689743685914) 2023-07-19 05:15:11,391 DEBUG [RS:1;jenkins-hbase4:41899] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:11,391 INFO [RS:1;jenkins-hbase4:41899] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:11,399 INFO [RS:1;jenkins-hbase4:41899] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 05:15:11,399 INFO [RS:1;jenkins-hbase4:41899] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 05:15:11,399 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 05:15:11,399 INFO [RS:1;jenkins-hbase4:41899] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 05:15:11,399 INFO [RS:1;jenkins-hbase4:41899] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 05:15:11,407 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.06 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/.tmp/m/05d3f44ea15e401695c7b0c841687b61 2023-07-19 05:15:11,408 INFO [RS:1;jenkins-hbase4:41899] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41899 2023-07-19 05:15:11,423 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 05d3f44ea15e401695c7b0c841687b61 2023-07-19 05:15:11,425 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/.tmp/m/05d3f44ea15e401695c7b0c841687b61 as hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/m/05d3f44ea15e401695c7b0c841687b61 2023-07-19 05:15:11,426 DEBUG [RS:2;jenkins-hbase4:41979] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/oldWALs 2023-07-19 05:15:11,426 INFO [RS:2;jenkins-hbase4:41979] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41979%2C1689743683435:(num 1689743685642) 2023-07-19 05:15:11,426 DEBUG [RS:2;jenkins-hbase4:41979] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:11,426 INFO [RS:2;jenkins-hbase4:41979] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:11,427 INFO [RS:2;jenkins-hbase4:41979] 
hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 05:15:11,428 INFO [RS:2;jenkins-hbase4:41979] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 05:15:11,428 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=71.95 KB at sequenceid=214 (bloomFilter=false), to=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/.tmp/info/ae5633aceb274a88b75c93c196f436a5 2023-07-19 05:15:11,428 INFO [RS:2;jenkins-hbase4:41979] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 05:15:11,428 INFO [RS:2;jenkins-hbase4:41979] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 05:15:11,428 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 05:15:11,430 INFO [RS:2;jenkins-hbase4:41979] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41979 2023-07-19 05:15:11,435 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 05d3f44ea15e401695c7b0c841687b61 2023-07-19 05:15:11,435 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ae5633aceb274a88b75c93c196f436a5 2023-07-19 05:15:11,435 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/m/05d3f44ea15e401695c7b0c841687b61, entries=28, sequenceid=101, filesize=6.1 K 2023-07-19 05:15:11,436 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~27.06 KB/27710, heapSize ~44.67 KB/45744, currentSize=0 B/0 for 36230d99e1f0bd83eb4e5988724a475f in 73ms, sequenceid=101, compaction requested=false 2023-07-19 05:15:11,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/rsgroup/36230d99e1f0bd83eb4e5988724a475f/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=12 2023-07-19 05:15:11,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 05:15:11,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 2023-07-19 05:15:11,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 36230d99e1f0bd83eb4e5988724a475f: 2023-07-19 05:15:11,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689743686364.36230d99e1f0bd83eb4e5988724a475f. 
2023-07-19 05:15:11,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c29fa493d7d1fcfaa46cb25b42ce4170, disabling compactions & flushes 2023-07-19 05:15:11,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:11,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:11,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. after waiting 0 ms 2023-07-19 05:15:11,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:11,453 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/default/unmovedTable/c29fa493d7d1fcfaa46cb25b42ce4170/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-19 05:15:11,454 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:11,454 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=214 (bloomFilter=false), to=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/.tmp/rep_barrier/9cbacaae363045c899034b1bbfe873d7 2023-07-19 05:15:11,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c29fa493d7d1fcfaa46cb25b42ce4170: 2023-07-19 05:15:11,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689743705703.c29fa493d7d1fcfaa46cb25b42ce4170. 2023-07-19 05:15:11,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f6f6fceaa7e24dc750aa525625e896fa, disabling compactions & flushes 2023-07-19 05:15:11,454 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:15:11,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:15:11,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. after waiting 0 ms 2023-07-19 05:15:11,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 
2023-07-19 05:15:11,456 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-19 05:15:11,457 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-19 05:15:11,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/namespace/f6f6fceaa7e24dc750aa525625e896fa/recovered.edits/15.seqid, newMaxSeqId=15, maxSeqId=12 2023-07-19 05:15:11,461 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:15:11,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f6f6fceaa7e24dc750aa525625e896fa: 2023-07-19 05:15:11,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689743686152.f6f6fceaa7e24dc750aa525625e896fa. 2023-07-19 05:15:11,462 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9cbacaae363045c899034b1bbfe873d7 2023-07-19 05:15:11,468 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:15:11,468 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:15:11,469 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:11,468 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:15:11,468 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41979,1689743683435 2023-07-19 05:15:11,469 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:11,469 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:15:11,469 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:11,469 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:11,469 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:15:11,469 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:15:11,469 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:11,469 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41899,1689743683228 2023-07-19 05:15:11,470 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41979,1689743683435] 2023-07-19 05:15:11,470 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41979,1689743683435; numProcessing=1 2023-07-19 05:15:11,472 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41979,1689743683435 already deleted, retry=false 2023-07-19 05:15:11,472 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41979,1689743683435 expired; onlineServers=3 2023-07-19 05:15:11,472 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41899,1689743683228] 2023-07-19 05:15:11,472 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41899,1689743683228; numProcessing=2 2023-07-19 05:15:11,474 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41899,1689743683228 already deleted, retry=false 2023-07-19 05:15:11,474 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41899,1689743683228 expired; onlineServers=2 2023-07-19 05:15:11,474 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.81 KB at sequenceid=214 (bloomFilter=false), to=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/.tmp/table/8514f5352d1a482cbf8b8f4a7c40333f 2023-07-19 05:15:11,481 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8514f5352d1a482cbf8b8f4a7c40333f 2023-07-19 05:15:11,482 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/.tmp/info/ae5633aceb274a88b75c93c196f436a5 as hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/info/ae5633aceb274a88b75c93c196f436a5 2023-07-19 05:15:11,487 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-19 05:15:11,487 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-19 05:15:11,496 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ae5633aceb274a88b75c93c196f436a5 2023-07-19 05:15:11,496 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/info/ae5633aceb274a88b75c93c196f436a5, entries=97, sequenceid=214, filesize=16.0 K 2023-07-19 05:15:11,497 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/.tmp/rep_barrier/9cbacaae363045c899034b1bbfe873d7 as hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/rep_barrier/9cbacaae363045c899034b1bbfe873d7 2023-07-19 05:15:11,504 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9cbacaae363045c899034b1bbfe873d7 2023-07-19 05:15:11,504 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/rep_barrier/9cbacaae363045c899034b1bbfe873d7, entries=18, sequenceid=214, filesize=6.9 K 2023-07-19 05:15:11,505 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/.tmp/table/8514f5352d1a482cbf8b8f4a7c40333f as hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/table/8514f5352d1a482cbf8b8f4a7c40333f 2023-07-19 05:15:11,515 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8514f5352d1a482cbf8b8f4a7c40333f 2023-07-19 05:15:11,515 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/table/8514f5352d1a482cbf8b8f4a7c40333f, entries=27, sequenceid=214, filesize=7.2 K 2023-07-19 05:15:11,516 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~77.76 KB/79623, heapSize ~122.36 KB/125296, currentSize=0 B/0 for 1588230740 in 143ms, sequenceid=214, compaction requested=false 2023-07-19 05:15:11,531 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/data/hbase/meta/1588230740/recovered.edits/217.seqid, 
newMaxSeqId=217, maxSeqId=19 2023-07-19 05:15:11,532 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 05:15:11,532 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 05:15:11,532 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 05:15:11,532 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-19 05:15:11,565 INFO [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43237,1689743687175; all regions closed. 2023-07-19 05:15:11,571 DEBUG [RS:3;jenkins-hbase4:43237] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/oldWALs 2023-07-19 05:15:11,571 INFO [RS:3;jenkins-hbase4:43237] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43237%2C1689743687175:(num 1689743687578) 2023-07-19 05:15:11,571 DEBUG [RS:3;jenkins-hbase4:43237] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:11,571 INFO [RS:3;jenkins-hbase4:43237] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:11,572 INFO [RS:3;jenkins-hbase4:43237] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 05:15:11,572 INFO [RS:3;jenkins-hbase4:43237] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 05:15:11,572 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 05:15:11,572 INFO [RS:3;jenkins-hbase4:43237] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 05:15:11,572 INFO [RS:3;jenkins-hbase4:43237] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 05:15:11,573 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45681,1689743683028; all regions closed. 
2023-07-19 05:15:11,573 INFO [RS:3;jenkins-hbase4:43237] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43237 2023-07-19 05:15:11,577 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:11,577 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:11,577 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43237,1689743687175 2023-07-19 05:15:11,578 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43237,1689743687175] 2023-07-19 05:15:11,578 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43237,1689743687175; numProcessing=3 2023-07-19 05:15:11,580 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43237,1689743687175 already deleted, retry=false 2023-07-19 05:15:11,580 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43237,1689743687175 expired; onlineServers=1 2023-07-19 05:15:11,583 DEBUG [RS:0;jenkins-hbase4:45681] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/oldWALs 2023-07-19 05:15:11,583 INFO [RS:0;jenkins-hbase4:45681] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45681%2C1689743683028.meta:.meta(num 1689743688347) 2023-07-19 05:15:11,590 DEBUG [RS:0;jenkins-hbase4:45681] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/oldWALs 2023-07-19 05:15:11,590 INFO [RS:0;jenkins-hbase4:45681] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45681%2C1689743683028:(num 1689743685642) 2023-07-19 05:15:11,591 DEBUG [RS:0;jenkins-hbase4:45681] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:11,591 INFO [RS:0;jenkins-hbase4:45681] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:11,591 INFO [RS:0;jenkins-hbase4:45681] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 05:15:11,591 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-19 05:15:11,592 INFO [RS:0;jenkins-hbase4:45681] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45681 2023-07-19 05:15:11,594 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45681,1689743683028 2023-07-19 05:15:11,594 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:11,597 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45681,1689743683028] 2023-07-19 05:15:11,597 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45681,1689743683028; numProcessing=4 2023-07-19 05:15:11,598 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45681,1689743683028 already deleted, retry=false 2023-07-19 05:15:11,598 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45681,1689743683028 expired; onlineServers=0 2023-07-19 05:15:11,598 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35853,1689743680958' ***** 2023-07-19 05:15:11,598 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-19 05:15:11,599 DEBUG [M:0;jenkins-hbase4:35853] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@41316987, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 05:15:11,599 INFO [M:0;jenkins-hbase4:35853] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 05:15:11,601 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-19 05:15:11,601 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:11,601 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 05:15:11,601 INFO [M:0;jenkins-hbase4:35853] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@b4df03b{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 05:15:11,602 INFO [M:0;jenkins-hbase4:35853] server.AbstractConnector(383): Stopped ServerConnector@63cd509d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:11,602 INFO [M:0;jenkins-hbase4:35853] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 05:15:11,603 INFO [M:0;jenkins-hbase4:35853] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@6d4efddd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 05:15:11,603 INFO [M:0;jenkins-hbase4:35853] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@44044822{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/hadoop.log.dir/,STOPPED} 2023-07-19 05:15:11,604 INFO [M:0;jenkins-hbase4:35853] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35853,1689743680958 2023-07-19 05:15:11,604 INFO [M:0;jenkins-hbase4:35853] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35853,1689743680958; all regions closed. 2023-07-19 05:15:11,604 DEBUG [M:0;jenkins-hbase4:35853] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:11,604 INFO [M:0;jenkins-hbase4:35853] master.HMaster(1491): Stopping master jetty server 2023-07-19 05:15:11,604 INFO [M:0;jenkins-hbase4:35853] server.AbstractConnector(383): Stopped ServerConnector@7a62212f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:11,605 DEBUG [M:0;jenkins-hbase4:35853] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-19 05:15:11,605 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-19 05:15:11,605 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689743685256] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689743685256,5,FailOnTimeoutGroup] 2023-07-19 05:15:11,605 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689743685255] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689743685255,5,FailOnTimeoutGroup] 2023-07-19 05:15:11,605 DEBUG [M:0;jenkins-hbase4:35853] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-19 05:15:11,605 INFO [M:0;jenkins-hbase4:35853] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-19 05:15:11,605 INFO [M:0;jenkins-hbase4:35853] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-19 05:15:11,605 INFO [M:0;jenkins-hbase4:35853] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-19 05:15:11,605 DEBUG [M:0;jenkins-hbase4:35853] master.HMaster(1512): Stopping service threads 2023-07-19 05:15:11,605 INFO [M:0;jenkins-hbase4:35853] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-19 05:15:11,606 ERROR [M:0;jenkins-hbase4:35853] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-19 05:15:11,606 INFO [M:0;jenkins-hbase4:35853] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-19 05:15:11,606 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-19 05:15:11,607 DEBUG [M:0;jenkins-hbase4:35853] zookeeper.ZKUtil(398): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-19 05:15:11,607 WARN [M:0;jenkins-hbase4:35853] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-19 05:15:11,607 INFO [M:0;jenkins-hbase4:35853] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-19 05:15:11,607 INFO [M:0;jenkins-hbase4:35853] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-19 05:15:11,607 DEBUG [M:0;jenkins-hbase4:35853] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 05:15:11,607 INFO [M:0;jenkins-hbase4:35853] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:15:11,607 DEBUG [M:0;jenkins-hbase4:35853] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:15:11,607 DEBUG [M:0;jenkins-hbase4:35853] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 05:15:11,607 DEBUG [M:0;jenkins-hbase4:35853] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-19 05:15:11,608 INFO [M:0;jenkins-hbase4:35853] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=528.71 KB heapSize=632.86 KB 2023-07-19 05:15:11,626 INFO [M:0;jenkins-hbase4:35853] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=528.71 KB at sequenceid=1176 (bloomFilter=true), to=hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/a89bbd4b177a4dcda0325a95529bd255 2023-07-19 05:15:11,632 DEBUG [M:0;jenkins-hbase4:35853] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/a89bbd4b177a4dcda0325a95529bd255 as hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a89bbd4b177a4dcda0325a95529bd255 2023-07-19 05:15:11,638 INFO [M:0;jenkins-hbase4:35853] regionserver.HStore(1080): Added hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a89bbd4b177a4dcda0325a95529bd255, entries=157, sequenceid=1176, filesize=27.6 K 2023-07-19 05:15:11,639 INFO [M:0;jenkins-hbase4:35853] regionserver.HRegion(2948): Finished flush of dataSize ~528.71 KB/541396, heapSize ~632.84 KB/648032, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=1176, compaction requested=false 2023-07-19 05:15:11,642 INFO [M:0;jenkins-hbase4:35853] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:15:11,642 DEBUG [M:0;jenkins-hbase4:35853] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 05:15:11,647 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 05:15:11,647 INFO [M:0;jenkins-hbase4:35853] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-19 05:15:11,648 INFO [M:0;jenkins-hbase4:35853] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35853 2023-07-19 05:15:11,649 DEBUG [M:0;jenkins-hbase4:35853] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,35853,1689743680958 already deleted, retry=false 2023-07-19 05:15:11,928 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:11,928 INFO [M:0;jenkins-hbase4:35853] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35853,1689743680958; zookeeper connection closed. 
2023-07-19 05:15:11,928 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): master:35853-0x1017c00e52c0000, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:11,988 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-19 05:15:12,029 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:12,029 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45681,1689743683028; zookeeper connection closed. 2023-07-19 05:15:12,029 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1017c00e52c0001, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:12,029 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4ec061b1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4ec061b1 2023-07-19 05:15:12,129 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:12,129 INFO [RS:3;jenkins-hbase4:43237] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43237,1689743687175; zookeeper connection closed. 2023-07-19 05:15:12,129 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:43237-0x1017c00e52c000b, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:12,129 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6dc56a71] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6dc56a71 2023-07-19 05:15:12,229 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:12,229 INFO [RS:2;jenkins-hbase4:41979] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41979,1689743683435; zookeeper connection closed. 2023-07-19 05:15:12,229 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41979-0x1017c00e52c0003, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:12,230 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4849b0ca] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4849b0ca 2023-07-19 05:15:12,329 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:12,330 INFO [RS:1;jenkins-hbase4:41899] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41899,1689743683228; zookeeper connection closed. 
2023-07-19 05:15:12,330 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): regionserver:41899-0x1017c00e52c0002, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:12,330 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1160e217] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1160e217 2023-07-19 05:15:12,330 INFO [Listener at localhost/38799] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-19 05:15:12,331 WARN [Listener at localhost/38799] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 05:15:12,334 INFO [Listener at localhost/38799] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 05:15:12,438 WARN [BP-1580366368-172.31.14.131-1689743677571 heartbeating to localhost/127.0.0.1:34189] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 05:15:12,438 WARN [BP-1580366368-172.31.14.131-1689743677571 heartbeating to localhost/127.0.0.1:34189] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1580366368-172.31.14.131-1689743677571 (Datanode Uuid cc806028-c72f-4b87-ae2e-65a60f2f2519) service to localhost/127.0.0.1:34189 2023-07-19 05:15:12,440 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/cluster_bf8aab3d-fc29-3d9a-7b1d-919a6995935e/dfs/data/data5/current/BP-1580366368-172.31.14.131-1689743677571] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:12,440 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/cluster_bf8aab3d-fc29-3d9a-7b1d-919a6995935e/dfs/data/data6/current/BP-1580366368-172.31.14.131-1689743677571] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:12,442 WARN [Listener at localhost/38799] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 05:15:12,444 INFO [Listener at localhost/38799] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 05:15:12,447 WARN [BP-1580366368-172.31.14.131-1689743677571 heartbeating to localhost/127.0.0.1:34189] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 05:15:12,447 WARN [BP-1580366368-172.31.14.131-1689743677571 heartbeating to localhost/127.0.0.1:34189] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1580366368-172.31.14.131-1689743677571 (Datanode Uuid ee2087e9-74e5-4fa9-aee9-e4c7a687ba43) service to localhost/127.0.0.1:34189 2023-07-19 05:15:12,447 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/cluster_bf8aab3d-fc29-3d9a-7b1d-919a6995935e/dfs/data/data3/current/BP-1580366368-172.31.14.131-1689743677571] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:12,448 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/cluster_bf8aab3d-fc29-3d9a-7b1d-919a6995935e/dfs/data/data4/current/BP-1580366368-172.31.14.131-1689743677571] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:12,448 WARN [Listener at localhost/38799] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 05:15:12,455 INFO [Listener at localhost/38799] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 05:15:12,558 WARN [BP-1580366368-172.31.14.131-1689743677571 heartbeating to localhost/127.0.0.1:34189] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 05:15:12,558 WARN [BP-1580366368-172.31.14.131-1689743677571 heartbeating to localhost/127.0.0.1:34189] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1580366368-172.31.14.131-1689743677571 (Datanode Uuid 328eaa33-c8d1-46ed-94e4-2de79e5a106e) service to localhost/127.0.0.1:34189 2023-07-19 05:15:12,558 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/cluster_bf8aab3d-fc29-3d9a-7b1d-919a6995935e/dfs/data/data1/current/BP-1580366368-172.31.14.131-1689743677571] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:12,559 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/cluster_bf8aab3d-fc29-3d9a-7b1d-919a6995935e/dfs/data/data2/current/BP-1580366368-172.31.14.131-1689743677571] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:12,587 INFO [Listener at localhost/38799] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 05:15:12,706 INFO [Listener at localhost/38799] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-19 05:15:12,754 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-19 05:15:12,754 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-19 05:15:12,754 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/hadoop.log.dir so I do NOT create it in target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8 2023-07-19 05:15:12,754 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a5b3b8b6-cf4e-9609-f882-6066e741168c/hadoop.tmp.dir so I do NOT create it in target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8 2023-07-19 05:15:12,754 INFO [Listener at localhost/38799] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/cluster_a40a042b-8387-eb97-eb03-b118443e501d, deleteOnExit=true 2023-07-19 05:15:12,754 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-19 05:15:12,755 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/test.cache.data in system properties and HBase conf 2023-07-19 05:15:12,755 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/hadoop.tmp.dir in system properties and HBase conf 2023-07-19 05:15:12,755 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/hadoop.log.dir in system properties and HBase conf 2023-07-19 05:15:12,755 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-19 05:15:12,755 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-19 05:15:12,755 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-19 05:15:12,755 DEBUG [Listener at localhost/38799] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-19 05:15:12,755 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-19 05:15:12,756 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-19 05:15:12,756 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-19 05:15:12,756 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 05:15:12,756 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-19 05:15:12,756 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-19 05:15:12,756 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 05:15:12,756 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 05:15:12,756 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-19 05:15:12,756 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/nfs.dump.dir in system properties and HBase conf 2023-07-19 05:15:12,756 INFO [Listener at localhost/38799] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/java.io.tmpdir in system properties and HBase conf 2023-07-19 05:15:12,757 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 05:15:12,757 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-19 05:15:12,757 INFO [Listener at localhost/38799] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-19 05:15:12,761 WARN [Listener at localhost/38799] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 05:15:12,761 WARN [Listener at localhost/38799] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 05:15:12,803 DEBUG [Listener at localhost/38799-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1017c00e52c000a, quorum=127.0.0.1:54772, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-19 05:15:12,803 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1017c00e52c000a, quorum=127.0.0.1:54772, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-19 05:15:12,812 WARN [Listener at localhost/38799] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 05:15:12,815 INFO [Listener at localhost/38799] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 05:15:12,821 INFO [Listener at localhost/38799] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/java.io.tmpdir/Jetty_localhost_36281_hdfs____hx1q1e/webapp 2023-07-19 05:15:12,952 INFO [Listener at localhost/38799] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36281 2023-07-19 05:15:12,961 WARN [Listener at localhost/38799] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 05:15:12,962 WARN [Listener at localhost/38799] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 05:15:13,060 WARN [Listener at localhost/39859] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 05:15:13,077 WARN [Listener at localhost/39859] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 05:15:13,080 WARN [Listener 
at localhost/39859] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 05:15:13,081 INFO [Listener at localhost/39859] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 05:15:13,086 INFO [Listener at localhost/39859] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/java.io.tmpdir/Jetty_localhost_37435_datanode____.hylduv/webapp 2023-07-19 05:15:13,182 INFO [Listener at localhost/39859] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37435 2023-07-19 05:15:13,189 WARN [Listener at localhost/36887] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 05:15:13,218 WARN [Listener at localhost/36887] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 05:15:13,222 WARN [Listener at localhost/36887] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 05:15:13,224 INFO [Listener at localhost/36887] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 05:15:13,227 INFO [Listener at localhost/36887] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/java.io.tmpdir/Jetty_localhost_41491_datanode____obn6dy/webapp 2023-07-19 05:15:13,346 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9ff9c86e4b9d284d: Processing first storage report for DS-61e74ad5-bf73-4e56-81c0-988bafd59d61 from datanode a0ddb556-74cf-41dd-bf72-3efb3b220d99 2023-07-19 05:15:13,346 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9ff9c86e4b9d284d: from storage DS-61e74ad5-bf73-4e56-81c0-988bafd59d61 node DatanodeRegistration(127.0.0.1:46229, datanodeUuid=a0ddb556-74cf-41dd-bf72-3efb3b220d99, infoPort=42857, infoSecurePort=0, ipcPort=36887, storageInfo=lv=-57;cid=testClusterID;nsid=1396573484;c=1689743712768), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 05:15:13,346 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9ff9c86e4b9d284d: Processing first storage report for DS-569e4b8c-8908-4515-af3d-fc1770c23559 from datanode a0ddb556-74cf-41dd-bf72-3efb3b220d99 2023-07-19 05:15:13,347 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9ff9c86e4b9d284d: from storage DS-569e4b8c-8908-4515-af3d-fc1770c23559 node DatanodeRegistration(127.0.0.1:46229, datanodeUuid=a0ddb556-74cf-41dd-bf72-3efb3b220d99, infoPort=42857, infoSecurePort=0, ipcPort=36887, storageInfo=lv=-57;cid=testClusterID;nsid=1396573484;c=1689743712768), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 05:15:13,365 INFO [Listener at localhost/36887] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41491 2023-07-19 05:15:13,395 WARN [Listener at localhost/46837] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-19 05:15:13,416 WARN [Listener at localhost/46837] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 05:15:13,419 WARN [Listener at localhost/46837] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 05:15:13,421 INFO [Listener at localhost/46837] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 05:15:13,426 INFO [Listener at localhost/46837] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/java.io.tmpdir/Jetty_localhost_43785_datanode____5jkzg/webapp 2023-07-19 05:15:13,500 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb4b444bc15b29fcb: Processing first storage report for DS-686316b1-d9d1-4e30-b758-fc057cfd6c78 from datanode 899c752e-603d-49ad-9bd2-3f0601e92e32 2023-07-19 05:15:13,501 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb4b444bc15b29fcb: from storage DS-686316b1-d9d1-4e30-b758-fc057cfd6c78 node DatanodeRegistration(127.0.0.1:34165, datanodeUuid=899c752e-603d-49ad-9bd2-3f0601e92e32, infoPort=42001, infoSecurePort=0, ipcPort=46837, storageInfo=lv=-57;cid=testClusterID;nsid=1396573484;c=1689743712768), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-19 05:15:13,501 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb4b444bc15b29fcb: Processing first storage report for DS-f9a42e16-7f94-4eb7-82d9-fb17f47f740a from datanode 899c752e-603d-49ad-9bd2-3f0601e92e32 2023-07-19 05:15:13,501 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb4b444bc15b29fcb: from storage DS-f9a42e16-7f94-4eb7-82d9-fb17f47f740a node DatanodeRegistration(127.0.0.1:34165, datanodeUuid=899c752e-603d-49ad-9bd2-3f0601e92e32, infoPort=42001, infoSecurePort=0, ipcPort=46837, storageInfo=lv=-57;cid=testClusterID;nsid=1396573484;c=1689743712768), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 05:15:13,550 INFO [Listener at localhost/46837] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43785 2023-07-19 05:15:13,569 WARN [Listener at localhost/43345] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 05:15:13,714 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb0ad0b6de0f6328: Processing first storage report for DS-d2fc99e7-d445-45de-b8e7-620647090d5d from datanode 8f23f68a-4b69-4c65-bdd5-e5fbc1398a93 2023-07-19 05:15:13,714 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb0ad0b6de0f6328: from storage DS-d2fc99e7-d445-45de-b8e7-620647090d5d node DatanodeRegistration(127.0.0.1:44789, datanodeUuid=8f23f68a-4b69-4c65-bdd5-e5fbc1398a93, infoPort=39793, infoSecurePort=0, ipcPort=43345, storageInfo=lv=-57;cid=testClusterID;nsid=1396573484;c=1689743712768), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 05:15:13,714 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb0ad0b6de0f6328: Processing first storage 
report for DS-9af3e931-0663-41fa-9c95-38787627fd5e from datanode 8f23f68a-4b69-4c65-bdd5-e5fbc1398a93 2023-07-19 05:15:13,714 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb0ad0b6de0f6328: from storage DS-9af3e931-0663-41fa-9c95-38787627fd5e node DatanodeRegistration(127.0.0.1:44789, datanodeUuid=8f23f68a-4b69-4c65-bdd5-e5fbc1398a93, infoPort=39793, infoSecurePort=0, ipcPort=43345, storageInfo=lv=-57;cid=testClusterID;nsid=1396573484;c=1689743712768), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 05:15:13,723 DEBUG [Listener at localhost/43345] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8 2023-07-19 05:15:13,725 INFO [Listener at localhost/43345] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/cluster_a40a042b-8387-eb97-eb03-b118443e501d/zookeeper_0, clientPort=58776, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/cluster_a40a042b-8387-eb97-eb03-b118443e501d/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/cluster_a40a042b-8387-eb97-eb03-b118443e501d/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-19 05:15:13,727 INFO [Listener at localhost/43345] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=58776 2023-07-19 05:15:13,727 INFO [Listener at localhost/43345] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:13,728 INFO [Listener at localhost/43345] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:13,751 INFO [Listener at localhost/43345] util.FSUtils(471): Created version file at hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a with version=8 2023-07-19 05:15:13,752 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/hbase-staging 2023-07-19 05:15:13,753 DEBUG [Listener at localhost/43345] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-19 05:15:13,753 DEBUG [Listener at localhost/43345] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-19 05:15:13,753 DEBUG [Listener at localhost/43345] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-19 05:15:13,753 DEBUG [Listener at localhost/43345] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-19 05:15:13,754 INFO [Listener at localhost/43345] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 05:15:13,755 INFO [Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:13,755 INFO [Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:13,755 INFO [Listener at localhost/43345] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 05:15:13,755 INFO [Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:13,755 INFO [Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 05:15:13,755 INFO [Listener at localhost/43345] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 05:15:13,756 INFO [Listener at localhost/43345] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37145 2023-07-19 05:15:13,757 INFO [Listener at localhost/43345] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:13,758 INFO [Listener at localhost/43345] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:13,758 INFO [Listener at localhost/43345] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37145 connecting to ZooKeeper ensemble=127.0.0.1:58776 2023-07-19 05:15:13,766 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:371450x0, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 05:15:13,770 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37145-0x1017c0168c00000 connected 2023-07-19 05:15:13,788 DEBUG [Listener at localhost/43345] zookeeper.ZKUtil(164): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 05:15:13,788 DEBUG [Listener at localhost/43345] zookeeper.ZKUtil(164): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:13,789 DEBUG [Listener at localhost/43345] zookeeper.ZKUtil(164): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 05:15:13,791 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37145 2023-07-19 05:15:13,794 DEBUG [Listener at localhost/43345] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37145 2023-07-19 05:15:13,794 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37145 2023-07-19 05:15:13,795 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37145 2023-07-19 05:15:13,795 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37145 2023-07-19 05:15:13,797 INFO [Listener at localhost/43345] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 05:15:13,797 INFO [Listener at localhost/43345] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 05:15:13,798 INFO [Listener at localhost/43345] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 05:15:13,798 INFO [Listener at localhost/43345] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-19 05:15:13,798 INFO [Listener at localhost/43345] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 05:15:13,799 INFO [Listener at localhost/43345] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 05:15:13,799 INFO [Listener at localhost/43345] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-19 05:15:13,799 INFO [Listener at localhost/43345] http.HttpServer(1146): Jetty bound to port 35463 2023-07-19 05:15:13,800 INFO [Listener at localhost/43345] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 05:15:13,805 INFO [Listener at localhost/43345] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:13,806 INFO [Listener at localhost/43345] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6e960f4c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/hadoop.log.dir/,AVAILABLE} 2023-07-19 05:15:13,806 INFO [Listener at localhost/43345] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:13,806 INFO [Listener at localhost/43345] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5f546421{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 05:15:13,922 INFO [Listener at localhost/43345] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 05:15:13,924 INFO [Listener at localhost/43345] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 05:15:13,924 INFO [Listener at localhost/43345] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 05:15:13,924 INFO [Listener at localhost/43345] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 05:15:13,926 INFO [Listener at localhost/43345] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:13,927 INFO [Listener at localhost/43345] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@38a9d218{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/java.io.tmpdir/jetty-0_0_0_0-35463-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5577481638320146246/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 05:15:13,929 INFO [Listener at localhost/43345] server.AbstractConnector(333): Started ServerConnector@78e67007{HTTP/1.1, (http/1.1)}{0.0.0.0:35463} 2023-07-19 05:15:13,929 INFO [Listener at localhost/43345] server.Server(415): Started @38357ms 2023-07-19 05:15:13,929 INFO [Listener at localhost/43345] master.HMaster(444): hbase.rootdir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a, hbase.cluster.distributed=false 2023-07-19 05:15:13,946 INFO [Listener at localhost/43345] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 05:15:13,946 INFO [Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:13,946 INFO [Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:13,946 
INFO [Listener at localhost/43345] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 05:15:13,947 INFO [Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:13,947 INFO [Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 05:15:13,947 INFO [Listener at localhost/43345] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 05:15:13,947 INFO [Listener at localhost/43345] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38277 2023-07-19 05:15:13,948 INFO [Listener at localhost/43345] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 05:15:13,950 DEBUG [Listener at localhost/43345] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 05:15:13,951 INFO [Listener at localhost/43345] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:13,952 INFO [Listener at localhost/43345] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:13,953 INFO [Listener at localhost/43345] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38277 connecting to ZooKeeper ensemble=127.0.0.1:58776 2023-07-19 05:15:13,957 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:382770x0, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 05:15:13,959 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38277-0x1017c0168c00001 connected 2023-07-19 05:15:13,959 DEBUG [Listener at localhost/43345] zookeeper.ZKUtil(164): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 05:15:13,959 DEBUG [Listener at localhost/43345] zookeeper.ZKUtil(164): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:13,960 DEBUG [Listener at localhost/43345] zookeeper.ZKUtil(164): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 05:15:13,964 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38277 2023-07-19 05:15:13,964 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38277 2023-07-19 05:15:13,965 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38277 2023-07-19 05:15:13,966 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38277 2023-07-19 05:15:13,966 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38277 2023-07-19 05:15:13,969 INFO [Listener at localhost/43345] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 05:15:13,969 INFO [Listener at localhost/43345] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 05:15:13,970 INFO [Listener at localhost/43345] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 05:15:13,970 INFO [Listener at localhost/43345] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 05:15:13,971 INFO [Listener at localhost/43345] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 05:15:13,971 INFO [Listener at localhost/43345] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 05:15:13,971 INFO [Listener at localhost/43345] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 05:15:13,972 INFO [Listener at localhost/43345] http.HttpServer(1146): Jetty bound to port 42711 2023-07-19 05:15:13,973 INFO [Listener at localhost/43345] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 05:15:13,975 INFO [Listener at localhost/43345] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:13,975 INFO [Listener at localhost/43345] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@ea82fff{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/hadoop.log.dir/,AVAILABLE} 2023-07-19 05:15:13,975 INFO [Listener at localhost/43345] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:13,976 INFO [Listener at localhost/43345] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d8a119b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 05:15:14,091 INFO [Listener at localhost/43345] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 05:15:14,092 INFO [Listener at localhost/43345] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 05:15:14,092 INFO [Listener at localhost/43345] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 05:15:14,092 INFO [Listener at localhost/43345] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 05:15:14,093 INFO [Listener at localhost/43345] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:14,094 INFO 
[Listener at localhost/43345] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@27061e18{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/java.io.tmpdir/jetty-0_0_0_0-42711-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3033983606941931483/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:14,096 INFO [Listener at localhost/43345] server.AbstractConnector(333): Started ServerConnector@619d9d61{HTTP/1.1, (http/1.1)}{0.0.0.0:42711} 2023-07-19 05:15:14,096 INFO [Listener at localhost/43345] server.Server(415): Started @38524ms 2023-07-19 05:15:14,109 INFO [Listener at localhost/43345] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 05:15:14,110 INFO [Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:14,110 INFO [Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:14,110 INFO [Listener at localhost/43345] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 05:15:14,110 INFO [Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:14,110 INFO [Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 05:15:14,110 INFO [Listener at localhost/43345] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 05:15:14,111 INFO [Listener at localhost/43345] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37135 2023-07-19 05:15:14,112 INFO [Listener at localhost/43345] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 05:15:14,114 DEBUG [Listener at localhost/43345] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 05:15:14,114 INFO [Listener at localhost/43345] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:14,115 INFO [Listener at localhost/43345] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:14,116 INFO [Listener at localhost/43345] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37135 connecting to ZooKeeper ensemble=127.0.0.1:58776 2023-07-19 05:15:14,120 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:371350x0, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 
05:15:14,122 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37135-0x1017c0168c00002 connected 2023-07-19 05:15:14,122 DEBUG [Listener at localhost/43345] zookeeper.ZKUtil(164): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 05:15:14,122 DEBUG [Listener at localhost/43345] zookeeper.ZKUtil(164): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:14,123 DEBUG [Listener at localhost/43345] zookeeper.ZKUtil(164): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 05:15:14,123 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37135 2023-07-19 05:15:14,124 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37135 2023-07-19 05:15:14,127 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37135 2023-07-19 05:15:14,128 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37135 2023-07-19 05:15:14,128 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37135 2023-07-19 05:15:14,131 INFO [Listener at localhost/43345] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 05:15:14,131 INFO [Listener at localhost/43345] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 05:15:14,131 INFO [Listener at localhost/43345] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 05:15:14,131 INFO [Listener at localhost/43345] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 05:15:14,131 INFO [Listener at localhost/43345] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 05:15:14,131 INFO [Listener at localhost/43345] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 05:15:14,132 INFO [Listener at localhost/43345] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-19 05:15:14,132 INFO [Listener at localhost/43345] http.HttpServer(1146): Jetty bound to port 43991 2023-07-19 05:15:14,132 INFO [Listener at localhost/43345] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 05:15:14,135 INFO [Listener at localhost/43345] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:14,136 INFO [Listener at localhost/43345] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@35eb1866{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/hadoop.log.dir/,AVAILABLE} 2023-07-19 05:15:14,136 INFO [Listener at localhost/43345] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:14,136 INFO [Listener at localhost/43345] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3de75ac7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 05:15:14,249 INFO [Listener at localhost/43345] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 05:15:14,250 INFO [Listener at localhost/43345] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 05:15:14,250 INFO [Listener at localhost/43345] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 05:15:14,250 INFO [Listener at localhost/43345] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 05:15:14,251 INFO [Listener at localhost/43345] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:14,252 INFO [Listener at localhost/43345] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4c4e4298{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/java.io.tmpdir/jetty-0_0_0_0-43991-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4413182773516951121/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:14,253 INFO [Listener at localhost/43345] server.AbstractConnector(333): Started ServerConnector@2f107ff7{HTTP/1.1, (http/1.1)}{0.0.0.0:43991} 2023-07-19 05:15:14,253 INFO [Listener at localhost/43345] server.Server(415): Started @38681ms 2023-07-19 05:15:14,265 INFO [Listener at localhost/43345] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 05:15:14,265 INFO [Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:14,265 INFO [Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:14,265 INFO [Listener at localhost/43345] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 05:15:14,265 INFO 
[Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:14,265 INFO [Listener at localhost/43345] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 05:15:14,266 INFO [Listener at localhost/43345] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 05:15:14,266 INFO [Listener at localhost/43345] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45187 2023-07-19 05:15:14,267 INFO [Listener at localhost/43345] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 05:15:14,268 DEBUG [Listener at localhost/43345] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 05:15:14,269 INFO [Listener at localhost/43345] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:14,270 INFO [Listener at localhost/43345] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:14,271 INFO [Listener at localhost/43345] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45187 connecting to ZooKeeper ensemble=127.0.0.1:58776 2023-07-19 05:15:14,275 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:451870x0, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 05:15:14,276 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45187-0x1017c0168c00003 connected 2023-07-19 05:15:14,276 DEBUG [Listener at localhost/43345] zookeeper.ZKUtil(164): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 05:15:14,277 DEBUG [Listener at localhost/43345] zookeeper.ZKUtil(164): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:14,277 DEBUG [Listener at localhost/43345] zookeeper.ZKUtil(164): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 05:15:14,278 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45187 2023-07-19 05:15:14,278 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45187 2023-07-19 05:15:14,278 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45187 2023-07-19 05:15:14,278 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45187 2023-07-19 05:15:14,280 DEBUG [Listener at localhost/43345] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45187 2023-07-19 05:15:14,282 INFO [Listener at localhost/43345] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 05:15:14,282 INFO [Listener at localhost/43345] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 05:15:14,283 INFO [Listener at localhost/43345] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 05:15:14,283 INFO [Listener at localhost/43345] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 05:15:14,283 INFO [Listener at localhost/43345] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 05:15:14,283 INFO [Listener at localhost/43345] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 05:15:14,283 INFO [Listener at localhost/43345] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 05:15:14,284 INFO [Listener at localhost/43345] http.HttpServer(1146): Jetty bound to port 39329 2023-07-19 05:15:14,284 INFO [Listener at localhost/43345] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 05:15:14,285 INFO [Listener at localhost/43345] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:14,286 INFO [Listener at localhost/43345] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@53ecbb85{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/hadoop.log.dir/,AVAILABLE} 2023-07-19 05:15:14,286 INFO [Listener at localhost/43345] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:14,286 INFO [Listener at localhost/43345] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7fb867a7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 05:15:14,407 INFO [Listener at localhost/43345] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 05:15:14,408 INFO [Listener at localhost/43345] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 05:15:14,409 INFO [Listener at localhost/43345] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 05:15:14,409 INFO [Listener at localhost/43345] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 05:15:14,410 INFO [Listener at localhost/43345] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:14,411 INFO [Listener at localhost/43345] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@64b278aa{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/java.io.tmpdir/jetty-0_0_0_0-39329-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3466212198277709682/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:14,413 INFO [Listener at localhost/43345] server.AbstractConnector(333): Started ServerConnector@7596ce29{HTTP/1.1, (http/1.1)}{0.0.0.0:39329} 2023-07-19 05:15:14,413 INFO [Listener at localhost/43345] server.Server(415): Started @38841ms 2023-07-19 05:15:14,416 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 05:15:14,423 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@7c7e7546{HTTP/1.1, (http/1.1)}{0.0.0.0:42533} 2023-07-19 05:15:14,423 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @38851ms 2023-07-19 05:15:14,423 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,37145,1689743713754 2023-07-19 05:15:14,425 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 05:15:14,426 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,37145,1689743713754 2023-07-19 05:15:14,427 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 05:15:14,427 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 05:15:14,427 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:14,428 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 05:15:14,428 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 05:15:14,429 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 05:15:14,431 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 05:15:14,431 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,37145,1689743713754 from backup master directory 2023-07-19 05:15:14,432 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,37145,1689743713754 2023-07-19 05:15:14,432 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 05:15:14,432 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 05:15:14,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,37145,1689743713754 2023-07-19 05:15:14,470 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/hbase.id with ID: db6d3be0-a031-417e-9d42-cc0747cd96de 2023-07-19 05:15:14,490 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:14,494 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:14,511 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7a7ac115 to 127.0.0.1:58776 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:15:14,517 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@405f880c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:15:14,517 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:14,518 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-19 05:15:14,518 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:15:14,520 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/MasterData/data/master/store-tmp 2023-07-19 05:15:14,535 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:14,535 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 05:15:14,535 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:15:14,535 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:15:14,535 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 05:15:14,535 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:15:14,535 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
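
The 'master:store' descriptor echoed above (single 'proc' family, VERSIONS=1, BLOOMFILTER=ROW, BLOCKSIZE=65536) is assembled internally by the master. A hedged sketch of how an equivalent descriptor could be expressed with the public HBase 2.x builder API; the values are copied from the log, and this is illustration only since the 'master' namespace is reserved and user code could not actually create this table:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  public static TableDescriptor build() {
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setMaxVersions(1)                  // VERSIONS => '1'
        .setBloomFilterType(BloomType.ROW)  // BLOOMFILTER => 'ROW'
        .setBlocksize(65536)                // BLOCKSIZE => '65536'
        .setInMemory(false)                 // IN_MEMORY => 'false'
        .build();
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("master", "store")) // 'master:store'
        .setColumnFamily(proc)
        .build();
  }

  public static void main(String[] args) {
    System.out.println(build());
  }
}
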
2023-07-19 05:15:14,535 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 05:15:14,536 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/MasterData/WALs/jenkins-hbase4.apache.org,37145,1689743713754 2023-07-19 05:15:14,539 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37145%2C1689743713754, suffix=, logDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/MasterData/WALs/jenkins-hbase4.apache.org,37145,1689743713754, archiveDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/MasterData/oldWALs, maxLogs=10 2023-07-19 05:15:14,555 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34165,DS-686316b1-d9d1-4e30-b758-fc057cfd6c78,DISK] 2023-07-19 05:15:14,560 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44789,DS-d2fc99e7-d445-45de-b8e7-620647090d5d,DISK] 2023-07-19 05:15:14,560 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46229,DS-61e74ad5-bf73-4e56-81c0-988bafd59d61,DISK] 2023-07-19 05:15:14,564 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/MasterData/WALs/jenkins-hbase4.apache.org,37145,1689743713754/jenkins-hbase4.apache.org%2C37145%2C1689743713754.1689743714539 2023-07-19 05:15:14,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34165,DS-686316b1-d9d1-4e30-b758-fc057cfd6c78,DISK], DatanodeInfoWithStorage[127.0.0.1:46229,DS-61e74ad5-bf73-4e56-81c0-988bafd59d61,DISK], DatanodeInfoWithStorage[127.0.0.1:44789,DS-d2fc99e7-d445-45de-b8e7-620647090d5d,DISK]] 2023-07-19 05:15:14,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:14,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:14,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:15:14,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:15:14,567 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:15:14,568 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-19 05:15:14,569 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-19 05:15:14,570 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:14,570 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:15:14,571 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:15:14,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:15:14,576 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:14,577 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11999811840, jitterRate=0.11756956577301025}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:14,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 05:15:14,577 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-19 05:15:14,579 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-19 05:15:14,579 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-19 05:15:14,579 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-19 05:15:14,579 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-19 05:15:14,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-19 05:15:14,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-19 05:15:14,582 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-19 05:15:14,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-19 05:15:14,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-19 05:15:14,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-19 05:15:14,585 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-19 05:15:14,587 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:14,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-19 05:15:14,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-19 05:15:14,588 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-19 05:15:14,589 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:14,589 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:14,589 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-19 05:15:14,589 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:14,590 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:14,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,37145,1689743713754, sessionid=0x1017c0168c00000, setting cluster-up flag (Was=false) 2023-07-19 05:15:14,596 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:14,600 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-19 05:15:14,601 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37145,1689743713754 2023-07-19 05:15:14,613 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:14,618 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-19 05:15:14,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37145,1689743713754 2023-07-19 05:15:14,620 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.hbase-snapshot/.tmp 2023-07-19 05:15:14,627 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-19 05:15:14,628 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-19 05:15:14,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-19 05:15:14,629 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37145,1689743713754] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 05:15:14,629 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
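
The coprocessor lines above show how rsgroups are wired in on branch-2.4: RSGroupAdminEndpoint is loaded as a master coprocessor (alongside the test's CPMasterObserver) and the group manager starts refreshing in offline mode until its backing table is available. A sketch of the equivalent configuration, assuming the two standard rsgroup keys; the test utility sets these programmatically rather than through hbase-site.xml:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupConfigSketch {
  public static Configuration rsGroupEnabledConf() {
    Configuration conf = HBaseConfiguration.create();
    // Load the rsgroup admin endpoint as a master coprocessor ...
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    // ... and use the group-aware balancer it expects.
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    return conf;
  }

  public static void main(String[] args) {
    Configuration conf = rsGroupEnabledConf();
    System.out.println(conf.get("hbase.coprocessor.master.classes"));
    System.out.println(conf.get("hbase.master.loadbalancer.class"));
  }
}
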
2023-07-19 05:15:14,630 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-19 05:15:14,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-19 05:15:14,642 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 05:15:14,642 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-19 05:15:14,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 05:15:14,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
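
The two StochasticLoadBalancer "Loaded config" entries echo its main tunables (maxSteps, runMaxSteps, stepsPerRegion, maxRunningTime, isByTable). A sketch of overriding them through configuration, assuming the hbase.master.balancer.stochastic.* keys those fields are read from; the numbers are simply the values reported in the log, not tuning advice:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerTuningSketch {
  public static Configuration tunedBalancerConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.master.balancer.stochastic.maxSteps", 1_000_000L);
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
    return conf;
  }

  public static void main(String[] args) {
    System.out.println(tunedBalancerConf().get("hbase.master.balancer.stochastic.maxSteps"));
  }
}
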
2023-07-19 05:15:14,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 05:15:14,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 05:15:14,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 05:15:14,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 05:15:14,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-19 05:15:14,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 05:15:14,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689743744645 2023-07-19 05:15:14,646 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-19 05:15:14,646 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-19 05:15:14,646 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-19 05:15:14,646 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-19 05:15:14,646 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 05:15:14,646 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-19 05:15:14,646 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-19 05:15:14,646 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-19 05:15:14,646 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
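
The executor-service and cleaner entries above show the master starting its worker pools and scheduling periodic chores such as LogsCleaner (period=600000 ms). A rough sketch of the same chore mechanism using ScheduledChore/ChoreService; these are internal (IA.Private) classes, so the constructors and the default millisecond period used here are assumptions based on the log rather than a supported public API:

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    // Period mirrors the LogsCleaner entry (600000 ms); the first run fires immediately.
    ScheduledChore chore = new ScheduledChore("demo-cleaner", stopper, 600_000) {
      @Override protected void chore() {
        System.out.println("cleaner pass at " + System.currentTimeMillis());
      }
    };
    ChoreService choreService = new ChoreService("demo");
    choreService.scheduleChore(chore);
    Thread.sleep(1_000);        // give the initial run a chance to execute
    choreService.shutdown();
  }
}
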
2023-07-19 05:15:14,647 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-19 05:15:14,647 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-19 05:15:14,647 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-19 05:15:14,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-19 05:15:14,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-19 05:15:14,648 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689743714648,5,FailOnTimeoutGroup] 2023-07-19 05:15:14,648 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:14,648 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689743714648,5,FailOnTimeoutGroup] 2023-07-19 05:15:14,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-19 05:15:14,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
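
Above, PEWorker-1 writes the bootstrap hbase:meta descriptor with its info, rep_barrier and table families. A sketch of inspecting that descriptor from an ordinary client once the cluster is up; the ZooKeeper quorum and client port are copied from the log, everything else is standard HBase 2.x client API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;

public class MetaDescriptorSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.set("hbase.zookeeper.property.clientPort", "58776");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableDescriptor meta = admin.getDescriptor(TableName.META_TABLE_NAME);
      // Prints the info, rep_barrier and table families created during bootstrap.
      for (ColumnFamilyDescriptor cf : meta.getColumnFamilies()) {
        System.out.println(cf.getNameAsString() + " versions=" + cf.getMaxVersions());
      }
    }
  }
}
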
2023-07-19 05:15:14,667 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 05:15:14,667 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 05:15:14,668 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a 2023-07-19 05:15:14,684 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:14,686 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 05:15:14,694 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/info 2023-07-19 05:15:14,695 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 05:15:14,696 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:14,696 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 05:15:14,697 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/rep_barrier 2023-07-19 05:15:14,698 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 05:15:14,699 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:14,699 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 05:15:14,700 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/table 2023-07-19 05:15:14,701 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 05:15:14,701 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:14,702 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740 2023-07-19 05:15:14,702 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740 2023-07-19 05:15:14,704 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-19 05:15:14,706 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 05:15:14,713 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:14,715 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10750666400, jitterRate=0.0012338310480117798}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 05:15:14,716 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 05:15:14,716 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 05:15:14,716 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 05:15:14,716 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 05:15:14,716 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 05:15:14,716 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 05:15:14,716 INFO [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(951): ClusterId : db6d3be0-a031-417e-9d42-cc0747cd96de 2023-07-19 05:15:14,717 INFO [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(951): ClusterId : db6d3be0-a031-417e-9d42-cc0747cd96de 2023-07-19 05:15:14,717 INFO [RS:2;jenkins-hbase4:45187] regionserver.HRegionServer(951): ClusterId : db6d3be0-a031-417e-9d42-cc0747cd96de 2023-07-19 05:15:14,718 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 05:15:14,719 DEBUG [RS:0;jenkins-hbase4:38277] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 05:15:14,719 DEBUG [RS:1;jenkins-hbase4:37135] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 05:15:14,719 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 05:15:14,720 DEBUG [RS:2;jenkins-hbase4:45187] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 05:15:14,721 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 05:15:14,721 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-19 05:15:14,721 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-19 05:15:14,723 DEBUG [RS:0;jenkins-hbase4:38277] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 05:15:14,723 DEBUG [RS:0;jenkins-hbase4:38277] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 05:15:14,724 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-19 05:15:14,724 DEBUG [RS:2;jenkins-hbase4:45187] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 05:15:14,725 DEBUG [RS:2;jenkins-hbase4:45187] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 05:15:14,730 DEBUG [RS:1;jenkins-hbase4:37135] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 05:15:14,730 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-19 05:15:14,731 DEBUG [RS:1;jenkins-hbase4:37135] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 05:15:14,732 DEBUG [RS:0;jenkins-hbase4:38277] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 05:15:14,734 DEBUG [RS:0;jenkins-hbase4:38277] zookeeper.ReadOnlyZKClient(139): Connect 0x6054239f to 127.0.0.1:58776 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:15:14,736 DEBUG [RS:2;jenkins-hbase4:45187] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 05:15:14,738 DEBUG [RS:2;jenkins-hbase4:45187] zookeeper.ReadOnlyZKClient(139): Connect 0x16ba9192 to 127.0.0.1:58776 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:15:14,738 DEBUG [RS:1;jenkins-hbase4:37135] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 05:15:14,740 DEBUG [RS:1;jenkins-hbase4:37135] zookeeper.ReadOnlyZKClient(139): Connect 0x604ab1ea to 127.0.0.1:58776 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:15:14,753 DEBUG [RS:0;jenkins-hbase4:38277] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6e93c1e5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:15:14,753 DEBUG [RS:0;jenkins-hbase4:38277] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6aac48ef, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 05:15:14,755 DEBUG [RS:2;jenkins-hbase4:45187] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c42fae2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:15:14,755 DEBUG [RS:1;jenkins-hbase4:37135] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6e8d4393, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 
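
The ReadOnlyZKClient and AbstractRpcClient entries above record the client-side connection settings each region server uses (session timeout 90000 ms, 30 retries at 1000 ms, connect/read/write timeouts of 10000/20000/60000 ms). A sketch of setting plausibly corresponding public configuration keys before opening a connection; the values are copied from the log, but the exact key-to-field mapping is not shown there, so treat the pairing as approximate:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ClientTimeoutSketch {
  public static Configuration clientConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("zookeeper.session.timeout", 90_000);  // "session timeout=90000ms"
    conf.setInt("zookeeper.recovery.retry", 30);       // "retries 30"
    conf.setInt("hbase.rpc.timeout", 20_000);          // roughly the logged readTO
    conf.setInt("hbase.client.operation.timeout", 60_000);
    conf.setBoolean("hbase.ipc.client.tcpnodelay", true);
    return conf;
  }

  public static void main(String[] args) {
    System.out.println(clientConf().get("zookeeper.session.timeout"));
  }
}
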
2023-07-19 05:15:14,755 DEBUG [RS:2;jenkins-hbase4:45187] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5940770e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 05:15:14,755 DEBUG [RS:1;jenkins-hbase4:37135] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6ab2da9a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 05:15:14,763 DEBUG [RS:0;jenkins-hbase4:38277] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:38277 2023-07-19 05:15:14,763 INFO [RS:0;jenkins-hbase4:38277] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 05:15:14,764 INFO [RS:0;jenkins-hbase4:38277] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 05:15:14,764 DEBUG [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 05:15:14,764 DEBUG [RS:1;jenkins-hbase4:37135] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:37135 2023-07-19 05:15:14,764 INFO [RS:1;jenkins-hbase4:37135] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 05:15:14,764 INFO [RS:1;jenkins-hbase4:37135] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 05:15:14,764 DEBUG [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 05:15:14,764 DEBUG [RS:2;jenkins-hbase4:45187] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:45187 2023-07-19 05:15:14,764 INFO [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37145,1689743713754 with isa=jenkins-hbase4.apache.org/172.31.14.131:38277, startcode=1689743713946 2023-07-19 05:15:14,764 INFO [RS:2;jenkins-hbase4:45187] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 05:15:14,765 INFO [RS:2;jenkins-hbase4:45187] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 05:15:14,765 DEBUG [RS:2;jenkins-hbase4:45187] regionserver.HRegionServer(1022): About to register with Master. 
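
Above, each of the three region servers installs its shutdown hook and calls reportForDuty against the master at port 37145. A sketch of confirming the result from a client by listing live servers through the ClusterMetrics API (standard HBase 2.x client calls; connection details are omitted and taken from the local configuration):

import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerMetrics;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class LiveServersSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      ClusterMetrics metrics = admin.getClusterMetrics();
      Map<ServerName, ServerMetrics> live = metrics.getLiveServerMetrics();
      // After the three reportForDuty calls above succeed, this map has three entries.
      System.out.println("live region servers: " + live.size());
      live.keySet().forEach(sn -> System.out.println("  " + sn));
    }
  }
}
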
2023-07-19 05:15:14,765 DEBUG [RS:0;jenkins-hbase4:38277] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 05:15:14,765 INFO [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37145,1689743713754 with isa=jenkins-hbase4.apache.org/172.31.14.131:37135, startcode=1689743714109 2023-07-19 05:15:14,765 DEBUG [RS:1;jenkins-hbase4:37135] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 05:15:14,765 INFO [RS:2;jenkins-hbase4:45187] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37145,1689743713754 with isa=jenkins-hbase4.apache.org/172.31.14.131:45187, startcode=1689743714265 2023-07-19 05:15:14,765 DEBUG [RS:2;jenkins-hbase4:45187] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 05:15:14,767 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40725, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 05:15:14,777 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37145] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:14,777 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37145,1689743713754] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 05:15:14,778 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52031, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 05:15:14,778 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40919, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 05:15:14,778 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37145,1689743713754] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-19 05:15:14,778 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37145] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:14,779 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37145,1689743713754] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 05:15:14,779 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37145,1689743713754] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-19 05:15:14,779 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37145] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45187,1689743714265 2023-07-19 05:15:14,779 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37145,1689743713754] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
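
ServerManager registers the three servers and the rsgroup ServerEventsListenerThread folds each one into the built-in 'default' group ("Updated with servers: 1/2/3"). A sketch of observing that from the rsgroup client used by these tests (RSGroupAdminClient from the hbase-rsgroup module); the constructor and method names follow the 2.4 test code paths and should be treated as assumptions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListRsGroupsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
        System.out.println(group.getName() + " -> " + group.getServers());
      }
      // Freshly registered servers end up in the built-in 'default' group.
      RSGroupInfo def = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      System.out.println("default group members: " + def.getServers());
    }
  }
}
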
2023-07-19 05:15:14,779 DEBUG [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a 2023-07-19 05:15:14,779 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37145,1689743713754] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-19 05:15:14,779 DEBUG [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39859 2023-07-19 05:15:14,779 DEBUG [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35463 2023-07-19 05:15:14,779 DEBUG [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a 2023-07-19 05:15:14,779 DEBUG [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39859 2023-07-19 05:15:14,779 DEBUG [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35463 2023-07-19 05:15:14,782 DEBUG [RS:2;jenkins-hbase4:45187] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a 2023-07-19 05:15:14,782 DEBUG [RS:2;jenkins-hbase4:45187] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39859 2023-07-19 05:15:14,783 DEBUG [RS:2;jenkins-hbase4:45187] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35463 2023-07-19 05:15:14,785 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:14,786 DEBUG [RS:1;jenkins-hbase4:37135] zookeeper.ZKUtil(162): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:14,786 DEBUG [RS:0;jenkins-hbase4:38277] zookeeper.ZKUtil(162): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:14,786 WARN [RS:1;jenkins-hbase4:37135] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 05:15:14,786 WARN [RS:0;jenkins-hbase4:38277] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-19 05:15:14,786 INFO [RS:1;jenkins-hbase4:37135] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:15:14,786 INFO [RS:0;jenkins-hbase4:38277] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:15:14,786 DEBUG [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/WALs/jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:14,786 DEBUG [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/WALs/jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:14,787 DEBUG [RS:2;jenkins-hbase4:45187] zookeeper.ZKUtil(162): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45187,1689743714265 2023-07-19 05:15:14,788 WARN [RS:2;jenkins-hbase4:45187] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 05:15:14,788 INFO [RS:2;jenkins-hbase4:45187] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:15:14,788 DEBUG [RS:2;jenkins-hbase4:45187] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/WALs/jenkins-hbase4.apache.org,45187,1689743714265 2023-07-19 05:15:14,796 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38277,1689743713946] 2023-07-19 05:15:14,796 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37135,1689743714109] 2023-07-19 05:15:14,796 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45187,1689743714265] 2023-07-19 05:15:14,803 DEBUG [RS:0;jenkins-hbase4:38277] zookeeper.ZKUtil(162): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:14,803 DEBUG [RS:1;jenkins-hbase4:37135] zookeeper.ZKUtil(162): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:14,803 DEBUG [RS:2;jenkins-hbase4:45187] zookeeper.ZKUtil(162): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:14,803 DEBUG [RS:0;jenkins-hbase4:38277] zookeeper.ZKUtil(162): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:14,803 DEBUG [RS:1;jenkins-hbase4:37135] zookeeper.ZKUtil(162): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:14,803 DEBUG [RS:2;jenkins-hbase4:45187] zookeeper.ZKUtil(162): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, 
baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:14,803 DEBUG [RS:0;jenkins-hbase4:38277] zookeeper.ZKUtil(162): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45187,1689743714265 2023-07-19 05:15:14,804 DEBUG [RS:1;jenkins-hbase4:37135] zookeeper.ZKUtil(162): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45187,1689743714265 2023-07-19 05:15:14,804 DEBUG [RS:2;jenkins-hbase4:45187] zookeeper.ZKUtil(162): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45187,1689743714265 2023-07-19 05:15:14,804 DEBUG [RS:0;jenkins-hbase4:38277] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 05:15:14,804 INFO [RS:0;jenkins-hbase4:38277] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 05:15:14,805 DEBUG [RS:2;jenkins-hbase4:45187] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 05:15:14,806 INFO [RS:2;jenkins-hbase4:45187] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 05:15:14,806 DEBUG [RS:1;jenkins-hbase4:37135] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 05:15:14,806 INFO [RS:0;jenkins-hbase4:38277] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 05:15:14,807 INFO [RS:1;jenkins-hbase4:37135] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 05:15:14,811 INFO [RS:0;jenkins-hbase4:38277] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 05:15:14,811 INFO [RS:0;jenkins-hbase4:38277] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,811 INFO [RS:1;jenkins-hbase4:37135] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 05:15:14,811 INFO [RS:2;jenkins-hbase4:45187] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 05:15:14,811 INFO [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 05:15:14,812 INFO [RS:1;jenkins-hbase4:37135] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 05:15:14,812 INFO [RS:2;jenkins-hbase4:45187] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 05:15:14,812 INFO [RS:1;jenkins-hbase4:37135] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
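[editor's note] The memstore limits and compaction-throughput bounds logged above are configuration-driven. As a rough sketch, and with the caveat that the property names below are recalled from HBase 2.x defaults rather than taken from this test, the same knobs could be set programmatically like so:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ThroughputConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Fraction of region-server heap shared by all memstores (drives globalMemStoreLimit).
        conf.setDouble("hbase.regionserver.global.memstore.size", 0.4);
        // PressureAwareCompactionThroughputController bounds, in bytes/second
        // (100 MB/s and 50 MB/s, matching the values printed above).
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        System.out.println(conf.get("hbase.hstore.compaction.throughput.higher.bound"));
      }
    }

Verify the exact keys against the hbase-default.xml of the version under test before relying on them.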
2023-07-19 05:15:14,812 INFO [RS:2;jenkins-hbase4:45187] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,812 INFO [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 05:15:14,814 INFO [RS:2;jenkins-hbase4:45187] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 05:15:14,814 INFO [RS:0;jenkins-hbase4:38277] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,815 DEBUG [RS:0;jenkins-hbase4:38277] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,815 DEBUG [RS:0;jenkins-hbase4:38277] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,815 DEBUG [RS:0;jenkins-hbase4:38277] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,815 DEBUG [RS:0;jenkins-hbase4:38277] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,815 DEBUG [RS:0;jenkins-hbase4:38277] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,815 DEBUG [RS:0;jenkins-hbase4:38277] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 05:15:14,815 DEBUG [RS:0;jenkins-hbase4:38277] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,815 DEBUG [RS:0;jenkins-hbase4:38277] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,815 INFO [RS:1;jenkins-hbase4:37135] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,815 INFO [RS:2;jenkins-hbase4:45187] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-19 05:15:14,815 DEBUG [RS:0;jenkins-hbase4:38277] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,816 DEBUG [RS:1;jenkins-hbase4:37135] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,816 DEBUG [RS:0;jenkins-hbase4:38277] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,816 DEBUG [RS:2;jenkins-hbase4:45187] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:1;jenkins-hbase4:37135] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:2;jenkins-hbase4:45187] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:1;jenkins-hbase4:37135] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:2;jenkins-hbase4:45187] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:1;jenkins-hbase4:37135] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:2;jenkins-hbase4:45187] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:1;jenkins-hbase4:37135] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:2;jenkins-hbase4:45187] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:1;jenkins-hbase4:37135] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 05:15:14,817 DEBUG [RS:2;jenkins-hbase4:45187] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 05:15:14,817 DEBUG [RS:1;jenkins-hbase4:37135] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:2;jenkins-hbase4:45187] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:1;jenkins-hbase4:37135] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:2;jenkins-hbase4:45187] executor.ExecutorService(93): Starting executor service 
name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:1;jenkins-hbase4:37135] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:2;jenkins-hbase4:45187] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:1;jenkins-hbase4:37135] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,817 DEBUG [RS:2;jenkins-hbase4:45187] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:14,823 INFO [RS:0;jenkins-hbase4:38277] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,824 INFO [RS:2;jenkins-hbase4:45187] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,824 INFO [RS:0;jenkins-hbase4:38277] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,824 INFO [RS:2;jenkins-hbase4:45187] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,824 INFO [RS:0;jenkins-hbase4:38277] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,824 INFO [RS:2;jenkins-hbase4:45187] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,824 INFO [RS:0;jenkins-hbase4:38277] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,824 INFO [RS:2;jenkins-hbase4:45187] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,829 INFO [RS:1;jenkins-hbase4:37135] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,830 INFO [RS:1;jenkins-hbase4:37135] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,830 INFO [RS:1;jenkins-hbase4:37135] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,830 INFO [RS:1;jenkins-hbase4:37135] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,838 INFO [RS:0;jenkins-hbase4:38277] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 05:15:14,838 INFO [RS:2;jenkins-hbase4:45187] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 05:15:14,838 INFO [RS:0;jenkins-hbase4:38277] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38277,1689743713946-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 05:15:14,838 INFO [RS:2;jenkins-hbase4:45187] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45187,1689743714265-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,841 INFO [RS:1;jenkins-hbase4:37135] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 05:15:14,841 INFO [RS:1;jenkins-hbase4:37135] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37135,1689743714109-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,850 INFO [RS:0;jenkins-hbase4:38277] regionserver.Replication(203): jenkins-hbase4.apache.org,38277,1689743713946 started 2023-07-19 05:15:14,850 INFO [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38277,1689743713946, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38277, sessionid=0x1017c0168c00001 2023-07-19 05:15:14,850 DEBUG [RS:0;jenkins-hbase4:38277] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 05:15:14,850 DEBUG [RS:0;jenkins-hbase4:38277] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:14,850 DEBUG [RS:0;jenkins-hbase4:38277] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38277,1689743713946' 2023-07-19 05:15:14,850 DEBUG [RS:0;jenkins-hbase4:38277] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 05:15:14,850 DEBUG [RS:0;jenkins-hbase4:38277] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 05:15:14,851 DEBUG [RS:0;jenkins-hbase4:38277] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 05:15:14,851 DEBUG [RS:0;jenkins-hbase4:38277] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 05:15:14,851 DEBUG [RS:0;jenkins-hbase4:38277] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:14,851 DEBUG [RS:0;jenkins-hbase4:38277] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38277,1689743713946' 2023-07-19 05:15:14,851 DEBUG [RS:0;jenkins-hbase4:38277] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 05:15:14,851 DEBUG [RS:0;jenkins-hbase4:38277] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 05:15:14,851 DEBUG [RS:0;jenkins-hbase4:38277] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 05:15:14,851 INFO [RS:0;jenkins-hbase4:38277] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-19 05:15:14,854 INFO [RS:0;jenkins-hbase4:38277] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-19 05:15:14,854 DEBUG [RS:0;jenkins-hbase4:38277] zookeeper.ZKUtil(398): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-19 05:15:14,854 INFO [RS:0;jenkins-hbase4:38277] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-19 05:15:14,854 INFO [RS:2;jenkins-hbase4:45187] regionserver.Replication(203): jenkins-hbase4.apache.org,45187,1689743714265 started 2023-07-19 05:15:14,854 INFO [RS:2;jenkins-hbase4:45187] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45187,1689743714265, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45187, sessionid=0x1017c0168c00003 2023-07-19 05:15:14,855 DEBUG [RS:2;jenkins-hbase4:45187] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 05:15:14,855 DEBUG [RS:2;jenkins-hbase4:45187] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45187,1689743714265 2023-07-19 05:15:14,855 DEBUG [RS:2;jenkins-hbase4:45187] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45187,1689743714265' 2023-07-19 05:15:14,855 DEBUG [RS:2;jenkins-hbase4:45187] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 05:15:14,855 INFO [RS:0;jenkins-hbase4:38277] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,855 DEBUG [RS:2;jenkins-hbase4:45187] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 05:15:14,855 INFO [RS:0;jenkins-hbase4:38277] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 05:15:14,855 DEBUG [RS:2;jenkins-hbase4:45187] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 05:15:14,855 DEBUG [RS:2;jenkins-hbase4:45187] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 05:15:14,855 DEBUG [RS:2;jenkins-hbase4:45187] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45187,1689743714265 2023-07-19 05:15:14,856 DEBUG [RS:2;jenkins-hbase4:45187] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45187,1689743714265' 2023-07-19 05:15:14,856 DEBUG [RS:2;jenkins-hbase4:45187] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 05:15:14,857 DEBUG [RS:2;jenkins-hbase4:45187] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 05:15:14,857 INFO [RS:1;jenkins-hbase4:37135] regionserver.Replication(203): jenkins-hbase4.apache.org,37135,1689743714109 started 2023-07-19 05:15:14,857 INFO [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37135,1689743714109, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37135, sessionid=0x1017c0168c00002 2023-07-19 05:15:14,857 DEBUG [RS:1;jenkins-hbase4:37135] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 05:15:14,857 DEBUG [RS:1;jenkins-hbase4:37135] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:14,857 DEBUG [RS:1;jenkins-hbase4:37135] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37135,1689743714109' 2023-07-19 05:15:14,857 DEBUG [RS:2;jenkins-hbase4:45187] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 05:15:14,857 DEBUG [RS:1;jenkins-hbase4:37135] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 05:15:14,857 INFO [RS:2;jenkins-hbase4:45187] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-19 05:15:14,857 INFO [RS:2;jenkins-hbase4:45187] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,857 DEBUG [RS:1;jenkins-hbase4:37135] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 05:15:14,858 DEBUG [RS:2;jenkins-hbase4:45187] zookeeper.ZKUtil(398): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-19 05:15:14,858 INFO [RS:2;jenkins-hbase4:45187] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-19 05:15:14,858 INFO [RS:2;jenkins-hbase4:45187] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,858 DEBUG [RS:1;jenkins-hbase4:37135] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 05:15:14,858 INFO [RS:2;jenkins-hbase4:45187] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
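[editor's note] The flush-table-proc and online-snapshot members started above are the region-server side of HBase's distributed flush and snapshot procedures. A minimal client-side sketch that would exercise them (the table and snapshot names are hypothetical, not from this test) could look like:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushSnapshotSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("someTable");   // hypothetical table
          admin.flush(table);                        // exercises the flush-table-proc members
          admin.snapshot("someTable_snap", table);   // exercises the online-snapshot members
        }
      }
    }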
2023-07-19 05:15:14,858 DEBUG [RS:1;jenkins-hbase4:37135] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 05:15:14,858 DEBUG [RS:1;jenkins-hbase4:37135] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:14,858 DEBUG [RS:1;jenkins-hbase4:37135] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37135,1689743714109' 2023-07-19 05:15:14,858 DEBUG [RS:1;jenkins-hbase4:37135] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 05:15:14,858 DEBUG [RS:1;jenkins-hbase4:37135] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 05:15:14,858 DEBUG [RS:1;jenkins-hbase4:37135] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 05:15:14,858 INFO [RS:1;jenkins-hbase4:37135] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-19 05:15:14,858 INFO [RS:1;jenkins-hbase4:37135] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,859 DEBUG [RS:1;jenkins-hbase4:37135] zookeeper.ZKUtil(398): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-19 05:15:14,859 INFO [RS:1;jenkins-hbase4:37135] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-19 05:15:14,859 INFO [RS:1;jenkins-hbase4:37135] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:14,859 INFO [RS:1;jenkins-hbase4:37135] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
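[editor's note] By this point all three region servers have logged "Serving as ...". A hedged sketch of how a client could confirm the same thing through the HBase 2.x Admin API (connection details assumed):

    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class LiveServersSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          ClusterMetrics metrics = admin.getClusterMetrics();
          // For this run we would expect the three servers on ports 38277, 37135 and 45187.
          for (ServerName sn : metrics.getLiveServerMetrics().keySet()) {
            System.out.println("live region server: " + sn);
          }
        }
      }
    }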
2023-07-19 05:15:14,881 DEBUG [jenkins-hbase4:37145] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-19 05:15:14,881 DEBUG [jenkins-hbase4:37145] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:15:14,881 DEBUG [jenkins-hbase4:37145] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:15:14,881 DEBUG [jenkins-hbase4:37145] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:15:14,881 DEBUG [jenkins-hbase4:37145] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:15:14,881 DEBUG [jenkins-hbase4:37145] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:15:14,883 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37135,1689743714109, state=OPENING 2023-07-19 05:15:14,884 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-19 05:15:14,889 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:14,889 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37135,1689743714109}] 2023-07-19 05:15:14,889 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 05:15:14,941 WARN [ReadOnlyZKClient-127.0.0.1:58776@0x7a7ac115] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-19 05:15:14,941 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37145,1689743713754] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:15:14,943 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58766, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:15:14,944 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37135] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:58766 deadline: 1689743774943, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:14,959 INFO [RS:0;jenkins-hbase4:38277] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38277%2C1689743713946, suffix=, logDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/WALs/jenkins-hbase4.apache.org,38277,1689743713946, archiveDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/oldWALs, maxLogs=32 2023-07-19 05:15:14,959 INFO [RS:2;jenkins-hbase4:45187] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45187%2C1689743714265, suffix=, logDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/WALs/jenkins-hbase4.apache.org,45187,1689743714265, 
archiveDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/oldWALs, maxLogs=32 2023-07-19 05:15:14,960 INFO [RS:1;jenkins-hbase4:37135] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37135%2C1689743714109, suffix=, logDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/WALs/jenkins-hbase4.apache.org,37135,1689743714109, archiveDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/oldWALs, maxLogs=32 2023-07-19 05:15:14,989 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46229,DS-61e74ad5-bf73-4e56-81c0-988bafd59d61,DISK] 2023-07-19 05:15:14,989 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34165,DS-686316b1-d9d1-4e30-b758-fc057cfd6c78,DISK] 2023-07-19 05:15:14,990 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44789,DS-d2fc99e7-d445-45de-b8e7-620647090d5d,DISK] 2023-07-19 05:15:14,991 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34165,DS-686316b1-d9d1-4e30-b758-fc057cfd6c78,DISK] 2023-07-19 05:15:14,991 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44789,DS-d2fc99e7-d445-45de-b8e7-620647090d5d,DISK] 2023-07-19 05:15:14,991 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46229,DS-61e74ad5-bf73-4e56-81c0-988bafd59d61,DISK] 2023-07-19 05:15:14,995 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46229,DS-61e74ad5-bf73-4e56-81c0-988bafd59d61,DISK] 2023-07-19 05:15:14,995 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34165,DS-686316b1-d9d1-4e30-b758-fc057cfd6c78,DISK] 2023-07-19 05:15:14,995 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44789,DS-d2fc99e7-d445-45de-b8e7-620647090d5d,DISK] 2023-07-19 05:15:14,997 INFO [RS:2;jenkins-hbase4:45187] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/WALs/jenkins-hbase4.apache.org,45187,1689743714265/jenkins-hbase4.apache.org%2C45187%2C1689743714265.1689743714960 2023-07-19 05:15:14,999 INFO 
[RS:0;jenkins-hbase4:38277] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/WALs/jenkins-hbase4.apache.org,38277,1689743713946/jenkins-hbase4.apache.org%2C38277%2C1689743713946.1689743714960 2023-07-19 05:15:14,999 INFO [RS:1;jenkins-hbase4:37135] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/WALs/jenkins-hbase4.apache.org,37135,1689743714109/jenkins-hbase4.apache.org%2C37135%2C1689743714109.1689743714961 2023-07-19 05:15:15,002 DEBUG [RS:2;jenkins-hbase4:45187] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34165,DS-686316b1-d9d1-4e30-b758-fc057cfd6c78,DISK], DatanodeInfoWithStorage[127.0.0.1:44789,DS-d2fc99e7-d445-45de-b8e7-620647090d5d,DISK], DatanodeInfoWithStorage[127.0.0.1:46229,DS-61e74ad5-bf73-4e56-81c0-988bafd59d61,DISK]] 2023-07-19 05:15:15,003 DEBUG [RS:0;jenkins-hbase4:38277] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46229,DS-61e74ad5-bf73-4e56-81c0-988bafd59d61,DISK], DatanodeInfoWithStorage[127.0.0.1:44789,DS-d2fc99e7-d445-45de-b8e7-620647090d5d,DISK], DatanodeInfoWithStorage[127.0.0.1:34165,DS-686316b1-d9d1-4e30-b758-fc057cfd6c78,DISK]] 2023-07-19 05:15:15,003 DEBUG [RS:1;jenkins-hbase4:37135] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44789,DS-d2fc99e7-d445-45de-b8e7-620647090d5d,DISK], DatanodeInfoWithStorage[127.0.0.1:46229,DS-61e74ad5-bf73-4e56-81c0-988bafd59d61,DISK], DatanodeInfoWithStorage[127.0.0.1:34165,DS-686316b1-d9d1-4e30-b758-fc057cfd6c78,DISK]] 2023-07-19 05:15:15,044 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:15,046 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 05:15:15,047 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58780, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 05:15:15,051 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-19 05:15:15,051 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:15:15,053 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37135%2C1689743714109.meta, suffix=.meta, logDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/WALs/jenkins-hbase4.apache.org,37135,1689743714109, archiveDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/oldWALs, maxLogs=32 2023-07-19 05:15:15,067 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34165,DS-686316b1-d9d1-4e30-b758-fc057cfd6c78,DISK] 2023-07-19 05:15:15,068 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:44789,DS-d2fc99e7-d445-45de-b8e7-620647090d5d,DISK] 2023-07-19 05:15:15,068 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46229,DS-61e74ad5-bf73-4e56-81c0-988bafd59d61,DISK] 2023-07-19 05:15:15,070 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/WALs/jenkins-hbase4.apache.org,37135,1689743714109/jenkins-hbase4.apache.org%2C37135%2C1689743714109.meta.1689743715054.meta 2023-07-19 05:15:15,070 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34165,DS-686316b1-d9d1-4e30-b758-fc057cfd6c78,DISK], DatanodeInfoWithStorage[127.0.0.1:44789,DS-d2fc99e7-d445-45de-b8e7-620647090d5d,DISK], DatanodeInfoWithStorage[127.0.0.1:46229,DS-61e74ad5-bf73-4e56-81c0-988bafd59d61,DISK]] 2023-07-19 05:15:15,070 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:15,071 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 05:15:15,071 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-19 05:15:15,071 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-19 05:15:15,071 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-19 05:15:15,071 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:15,071 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-19 05:15:15,071 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-19 05:15:15,073 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 05:15:15,074 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/info 2023-07-19 05:15:15,075 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/info 2023-07-19 05:15:15,075 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 05:15:15,076 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:15,076 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 05:15:15,077 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/rep_barrier 2023-07-19 05:15:15,077 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/rep_barrier 2023-07-19 05:15:15,077 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 05:15:15,078 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:15,078 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 05:15:15,079 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/table 2023-07-19 05:15:15,079 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/table 2023-07-19 05:15:15,079 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 05:15:15,080 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:15,081 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740 2023-07-19 05:15:15,082 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740 2023-07-19 05:15:15,084 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-19 05:15:15,085 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 05:15:15,086 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10404662080, jitterRate=-0.03099033236503601}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 05:15:15,086 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 05:15:15,087 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689743715044 2023-07-19 05:15:15,092 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-19 05:15:15,092 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-19 05:15:15,093 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37135,1689743714109, state=OPEN 2023-07-19 05:15:15,094 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 05:15:15,094 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 05:15:15,095 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-19 05:15:15,096 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37135,1689743714109 in 205 msec 2023-07-19 05:15:15,097 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-19 05:15:15,097 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 375 msec 2023-07-19 05:15:15,098 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 467 msec 2023-07-19 05:15:15,098 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689743715098, completionTime=-1 2023-07-19 05:15:15,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-19 05:15:15,099 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
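[editor's note] The entries above show hbase:meta opening on jenkins-hbase4.apache.org,37135 and its location being set to OPEN in ZooKeeper. As a sketch (not the test's own code), a client can ask where meta landed like this:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocationSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
          // In this run the answer would be the ...,37135,... server, per the ZooKeeper state above.
          System.out.println("hbase:meta is on " + loc.getServerName());
        }
      }
    }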
2023-07-19 05:15:15,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-19 05:15:15,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689743775102 2023-07-19 05:15:15,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689743835102 2023-07-19 05:15:15,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-19 05:15:15,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37145,1689743713754-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:15,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37145,1689743713754-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:15,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37145,1689743713754-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:15,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:37145, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:15,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:15,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-19 05:15:15,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:15,109 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-19 05:15:15,109 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-19 05:15:15,110 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:15:15,111 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:15:15,112 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/hbase/namespace/0eea252164c18f46cb8ed0e29a81f112 2023-07-19 05:15:15,113 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/hbase/namespace/0eea252164c18f46cb8ed0e29a81f112 empty. 2023-07-19 05:15:15,113 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/hbase/namespace/0eea252164c18f46cb8ed0e29a81f112 2023-07-19 05:15:15,113 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-19 05:15:15,125 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-19 05:15:15,127 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0eea252164c18f46cb8ed0e29a81f112, NAME => 'hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp 2023-07-19 05:15:15,137 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:15,137 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 0eea252164c18f46cb8ed0e29a81f112, disabling compactions & flushes 2023-07-19 05:15:15,137 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112. 
2023-07-19 05:15:15,137 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112. 2023-07-19 05:15:15,137 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112. after waiting 0 ms 2023-07-19 05:15:15,138 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112. 2023-07-19 05:15:15,138 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112. 2023-07-19 05:15:15,138 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 0eea252164c18f46cb8ed0e29a81f112: 2023-07-19 05:15:15,140 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:15:15,140 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743715140"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743715140"}]},"ts":"1689743715140"} 2023-07-19 05:15:15,143 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 05:15:15,143 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:15:15,144 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743715143"}]},"ts":"1689743715143"} 2023-07-19 05:15:15,144 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-19 05:15:15,148 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:15:15,148 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:15:15,148 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:15:15,148 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:15:15,148 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:15:15,149 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0eea252164c18f46cb8ed0e29a81f112, ASSIGN}] 2023-07-19 05:15:15,152 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0eea252164c18f46cb8ed0e29a81f112, ASSIGN 2023-07-19 05:15:15,153 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=0eea252164c18f46cb8ed0e29a81f112, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37135,1689743714109; forceNewPlan=false, retain=false 2023-07-19 05:15:15,247 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37145,1689743713754] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:15,249 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37145,1689743713754] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-19 05:15:15,250 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:15:15,251 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:15:15,253 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/hbase/rsgroup/f43c0f30126da755511ab46ca11cb56e 2023-07-19 05:15:15,253 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/hbase/rsgroup/f43c0f30126da755511ab46ca11cb56e empty. 
2023-07-19 05:15:15,254 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/hbase/rsgroup/f43c0f30126da755511ab46ca11cb56e 2023-07-19 05:15:15,254 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-19 05:15:15,266 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-19 05:15:15,267 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => f43c0f30126da755511ab46ca11cb56e, NAME => 'hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp 2023-07-19 05:15:15,276 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:15,276 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing f43c0f30126da755511ab46ca11cb56e, disabling compactions & flushes 2023-07-19 05:15:15,276 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e. 2023-07-19 05:15:15,276 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e. 2023-07-19 05:15:15,276 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e. after waiting 0 ms 2023-07-19 05:15:15,276 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e. 2023-07-19 05:15:15,276 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e. 
2023-07-19 05:15:15,276 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for f43c0f30126da755511ab46ca11cb56e: 2023-07-19 05:15:15,279 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:15:15,280 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689743715279"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743715279"}]},"ts":"1689743715279"} 2023-07-19 05:15:15,281 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 05:15:15,282 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:15:15,282 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743715282"}]},"ts":"1689743715282"} 2023-07-19 05:15:15,283 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-19 05:15:15,286 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:15:15,286 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:15:15,286 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:15:15,286 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:15:15,286 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:15:15,287 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f43c0f30126da755511ab46ca11cb56e, ASSIGN}] 2023-07-19 05:15:15,287 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f43c0f30126da755511ab46ca11cb56e, ASSIGN 2023-07-19 05:15:15,288 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=f43c0f30126da755511ab46ca11cb56e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37135,1689743714109; forceNewPlan=false, retain=false 2023-07-19 05:15:15,288 INFO [jenkins-hbase4:37145] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-19 05:15:15,290 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0eea252164c18f46cb8ed0e29a81f112, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:15,290 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743715290"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743715290"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743715290"}]},"ts":"1689743715290"} 2023-07-19 05:15:15,291 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=f43c0f30126da755511ab46ca11cb56e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:15,291 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689743715291"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743715291"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743715291"}]},"ts":"1689743715291"} 2023-07-19 05:15:15,291 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 0eea252164c18f46cb8ed0e29a81f112, server=jenkins-hbase4.apache.org,37135,1689743714109}] 2023-07-19 05:15:15,292 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure f43c0f30126da755511ab46ca11cb56e, server=jenkins-hbase4.apache.org,37135,1689743714109}] 2023-07-19 05:15:15,447 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e. 2023-07-19 05:15:15,447 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f43c0f30126da755511ab46ca11cb56e, NAME => 'hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:15,448 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 05:15:15,448 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e. service=MultiRowMutationService 2023-07-19 05:15:15,448 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-19 05:15:15,448 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup f43c0f30126da755511ab46ca11cb56e 2023-07-19 05:15:15,448 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:15,448 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f43c0f30126da755511ab46ca11cb56e 2023-07-19 05:15:15,448 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f43c0f30126da755511ab46ca11cb56e 2023-07-19 05:15:15,450 INFO [StoreOpener-f43c0f30126da755511ab46ca11cb56e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region f43c0f30126da755511ab46ca11cb56e 2023-07-19 05:15:15,451 DEBUG [StoreOpener-f43c0f30126da755511ab46ca11cb56e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/rsgroup/f43c0f30126da755511ab46ca11cb56e/m 2023-07-19 05:15:15,451 DEBUG [StoreOpener-f43c0f30126da755511ab46ca11cb56e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/rsgroup/f43c0f30126da755511ab46ca11cb56e/m 2023-07-19 05:15:15,451 INFO [StoreOpener-f43c0f30126da755511ab46ca11cb56e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f43c0f30126da755511ab46ca11cb56e columnFamilyName m 2023-07-19 05:15:15,452 INFO [StoreOpener-f43c0f30126da755511ab46ca11cb56e-1] regionserver.HStore(310): Store=f43c0f30126da755511ab46ca11cb56e/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:15,453 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/rsgroup/f43c0f30126da755511ab46ca11cb56e 2023-07-19 05:15:15,453 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/rsgroup/f43c0f30126da755511ab46ca11cb56e 2023-07-19 05:15:15,456 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for f43c0f30126da755511ab46ca11cb56e 2023-07-19 05:15:15,458 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/rsgroup/f43c0f30126da755511ab46ca11cb56e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:15,458 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f43c0f30126da755511ab46ca11cb56e; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4f5cce94, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:15,458 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f43c0f30126da755511ab46ca11cb56e: 2023-07-19 05:15:15,459 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e., pid=9, masterSystemTime=1689743715443 2023-07-19 05:15:15,461 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e. 2023-07-19 05:15:15,461 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e. 2023-07-19 05:15:15,461 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112. 2023-07-19 05:15:15,462 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0eea252164c18f46cb8ed0e29a81f112, NAME => 'hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:15,462 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=f43c0f30126da755511ab46ca11cb56e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:15,462 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 0eea252164c18f46cb8ed0e29a81f112 2023-07-19 05:15:15,462 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:15,462 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689743715462"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743715462"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743715462"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743715462"}]},"ts":"1689743715462"} 2023-07-19 05:15:15,462 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0eea252164c18f46cb8ed0e29a81f112 2023-07-19 05:15:15,462 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(7897): checking classloading for 0eea252164c18f46cb8ed0e29a81f112 2023-07-19 05:15:15,463 INFO [StoreOpener-0eea252164c18f46cb8ed0e29a81f112-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 0eea252164c18f46cb8ed0e29a81f112 2023-07-19 05:15:15,465 DEBUG [StoreOpener-0eea252164c18f46cb8ed0e29a81f112-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/namespace/0eea252164c18f46cb8ed0e29a81f112/info 2023-07-19 05:15:15,465 DEBUG [StoreOpener-0eea252164c18f46cb8ed0e29a81f112-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/namespace/0eea252164c18f46cb8ed0e29a81f112/info 2023-07-19 05:15:15,465 INFO [StoreOpener-0eea252164c18f46cb8ed0e29a81f112-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0eea252164c18f46cb8ed0e29a81f112 columnFamilyName info 2023-07-19 05:15:15,465 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-19 05:15:15,465 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure f43c0f30126da755511ab46ca11cb56e, server=jenkins-hbase4.apache.org,37135,1689743714109 in 171 msec 2023-07-19 05:15:15,466 INFO [StoreOpener-0eea252164c18f46cb8ed0e29a81f112-1] regionserver.HStore(310): Store=0eea252164c18f46cb8ed0e29a81f112/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:15,466 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/namespace/0eea252164c18f46cb8ed0e29a81f112 2023-07-19 05:15:15,467 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/namespace/0eea252164c18f46cb8ed0e29a81f112 2023-07-19 05:15:15,467 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-19 05:15:15,467 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f43c0f30126da755511ab46ca11cb56e, ASSIGN in 178 msec 2023-07-19 05:15:15,468 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; 
CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:15:15,468 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743715468"}]},"ts":"1689743715468"} 2023-07-19 05:15:15,469 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-19 05:15:15,470 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0eea252164c18f46cb8ed0e29a81f112 2023-07-19 05:15:15,471 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/namespace/0eea252164c18f46cb8ed0e29a81f112/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:15,472 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0eea252164c18f46cb8ed0e29a81f112; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10767447040, jitterRate=0.002796649932861328}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:15,472 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0eea252164c18f46cb8ed0e29a81f112: 2023-07-19 05:15:15,472 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:15:15,473 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112., pid=8, masterSystemTime=1689743715443 2023-07-19 05:15:15,474 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 225 msec 2023-07-19 05:15:15,474 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112. 2023-07-19 05:15:15,474 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112. 
2023-07-19 05:15:15,475 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0eea252164c18f46cb8ed0e29a81f112, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:15,475 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743715474"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743715474"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743715474"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743715474"}]},"ts":"1689743715474"} 2023-07-19 05:15:15,477 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-19 05:15:15,477 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 0eea252164c18f46cb8ed0e29a81f112, server=jenkins-hbase4.apache.org,37135,1689743714109 in 185 msec 2023-07-19 05:15:15,478 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-19 05:15:15,479 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=0eea252164c18f46cb8ed0e29a81f112, ASSIGN in 328 msec 2023-07-19 05:15:15,479 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:15:15,479 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743715479"}]},"ts":"1689743715479"} 2023-07-19 05:15:15,480 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-19 05:15:15,482 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:15:15,483 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 374 msec 2023-07-19 05:15:15,510 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-19 05:15:15,511 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-19 05:15:15,511 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:15,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-19 05:15:15,523 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): 
master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 05:15:15,527 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-19 05:15:15,538 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-19 05:15:15,544 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 05:15:15,547 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-19 05:15:15,552 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37145,1689743713754] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-19 05:15:15,552 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37145,1689743713754] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-19 05:15:15,553 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-19 05:15:15,555 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-19 05:15:15,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.122sec 2023-07-19 05:15:15,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
2023-07-19 05:15:15,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:15,556 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:15,556 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37145,1689743713754] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:15,556 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-19 05:15:15,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-19 05:15:15,558 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:15:15,559 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:15:15,560 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37145,1689743713754] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 05:15:15,560 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-19 05:15:15,561 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/hbase/quota/b270606e2347483ed4a3ed3bf6ac8685 2023-07-19 05:15:15,561 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37145,1689743713754] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-19 05:15:15,562 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/hbase/quota/b270606e2347483ed4a3ed3bf6ac8685 empty. 2023-07-19 05:15:15,562 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/hbase/quota/b270606e2347483ed4a3ed3bf6ac8685 2023-07-19 05:15:15,562 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-19 05:15:15,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 
2023-07-19 05:15:15,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-19 05:15:15,569 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:15,570 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:15,570 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-19 05:15:15,570 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-19 05:15:15,570 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37145,1689743713754-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-19 05:15:15,570 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37145,1689743713754-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-19 05:15:15,571 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-19 05:15:15,584 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-19 05:15:15,585 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => b270606e2347483ed4a3ed3bf6ac8685, NAME => 'hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp 2023-07-19 05:15:15,600 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:15,600 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing b270606e2347483ed4a3ed3bf6ac8685, disabling compactions & flushes 2023-07-19 05:15:15,600 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685. 2023-07-19 05:15:15,600 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685. 
2023-07-19 05:15:15,600 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685. after waiting 0 ms 2023-07-19 05:15:15,600 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685. 2023-07-19 05:15:15,600 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685. 2023-07-19 05:15:15,600 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for b270606e2347483ed4a3ed3bf6ac8685: 2023-07-19 05:15:15,603 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:15:15,603 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689743715603"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743715603"}]},"ts":"1689743715603"} 2023-07-19 05:15:15,605 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 05:15:15,606 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:15:15,606 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743715606"}]},"ts":"1689743715606"} 2023-07-19 05:15:15,608 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-19 05:15:15,611 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:15:15,611 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:15:15,611 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:15:15,611 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:15:15,611 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:15:15,611 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=b270606e2347483ed4a3ed3bf6ac8685, ASSIGN}] 2023-07-19 05:15:15,612 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=b270606e2347483ed4a3ed3bf6ac8685, ASSIGN 2023-07-19 05:15:15,613 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=b270606e2347483ed4a3ed3bf6ac8685, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38277,1689743713946; forceNewPlan=false, retain=false 2023-07-19 05:15:15,616 DEBUG [Listener at 
localhost/43345] zookeeper.ReadOnlyZKClient(139): Connect 0x7c5dfa12 to 127.0.0.1:58776 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:15:15,625 DEBUG [Listener at localhost/43345] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2b7448e1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:15:15,626 DEBUG [hconnection-0x24130d34-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:15:15,629 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58788, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:15:15,630 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,37145,1689743713754 2023-07-19 05:15:15,630 INFO [Listener at localhost/43345] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:15,633 DEBUG [Listener at localhost/43345] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-19 05:15:15,635 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51838, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-19 05:15:15,638 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-19 05:15:15,638 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:15,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-19 05:15:15,639 DEBUG [Listener at localhost/43345] zookeeper.ReadOnlyZKClient(139): Connect 0x49b7c08e to 127.0.0.1:58776 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:15:15,644 DEBUG [Listener at localhost/43345] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@75f10e21, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:15:15,645 INFO [Listener at localhost/43345] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:58776 2023-07-19 05:15:15,648 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 05:15:15,650 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1017c0168c0000a connected 2023-07-19 05:15:15,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', 
hbase.namespace.quota.maxtables => '2'} 2023-07-19 05:15:15,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-19 05:15:15,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-19 05:15:15,664 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 05:15:15,666 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 12 msec 2023-07-19 05:15:15,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-19 05:15:15,763 INFO [jenkins-hbase4:37145] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 05:15:15,764 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=b270606e2347483ed4a3ed3bf6ac8685, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:15,764 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689743715764"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743715764"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743715764"}]},"ts":"1689743715764"} 2023-07-19 05:15:15,766 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure b270606e2347483ed4a3ed3bf6ac8685, server=jenkins-hbase4.apache.org,38277,1689743713946}] 2023-07-19 05:15:15,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:15,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-19 05:15:15,769 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:15:15,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-19 05:15:15,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-19 05:15:15,771 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:15,772 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo 
count: 2 2023-07-19 05:15:15,775 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:15:15,777 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/np1/table1/855beb39135ca8ab514e758e736983c6 2023-07-19 05:15:15,777 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/np1/table1/855beb39135ca8ab514e758e736983c6 empty. 2023-07-19 05:15:15,778 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/np1/table1/855beb39135ca8ab514e758e736983c6 2023-07-19 05:15:15,778 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-19 05:15:15,792 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-19 05:15:15,793 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 855beb39135ca8ab514e758e736983c6, NAME => 'np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp 2023-07-19 05:15:15,804 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:15,805 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 855beb39135ca8ab514e758e736983c6, disabling compactions & flushes 2023-07-19 05:15:15,805 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6. 2023-07-19 05:15:15,805 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6. 2023-07-19 05:15:15,805 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6. after waiting 0 ms 2023-07-19 05:15:15,805 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6. 2023-07-19 05:15:15,805 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6. 
2023-07-19 05:15:15,805 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 855beb39135ca8ab514e758e736983c6: 2023-07-19 05:15:15,807 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:15:15,808 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689743715808"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743715808"}]},"ts":"1689743715808"} 2023-07-19 05:15:15,809 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 05:15:15,810 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:15:15,810 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743715810"}]},"ts":"1689743715810"} 2023-07-19 05:15:15,811 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-19 05:15:15,813 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:15:15,814 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:15:15,814 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:15:15,814 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:15:15,814 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:15:15,814 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=855beb39135ca8ab514e758e736983c6, ASSIGN}] 2023-07-19 05:15:15,815 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=855beb39135ca8ab514e758e736983c6, ASSIGN 2023-07-19 05:15:15,815 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=855beb39135ca8ab514e758e736983c6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38277,1689743713946; forceNewPlan=false, retain=false 2023-07-19 05:15:15,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-19 05:15:15,919 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:15,920 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 05:15:15,921 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37094, 
version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 05:15:15,926 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685. 2023-07-19 05:15:15,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b270606e2347483ed4a3ed3bf6ac8685, NAME => 'hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:15,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota b270606e2347483ed4a3ed3bf6ac8685 2023-07-19 05:15:15,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:15,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b270606e2347483ed4a3ed3bf6ac8685 2023-07-19 05:15:15,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b270606e2347483ed4a3ed3bf6ac8685 2023-07-19 05:15:15,927 INFO [StoreOpener-b270606e2347483ed4a3ed3bf6ac8685-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region b270606e2347483ed4a3ed3bf6ac8685 2023-07-19 05:15:15,929 DEBUG [StoreOpener-b270606e2347483ed4a3ed3bf6ac8685-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/quota/b270606e2347483ed4a3ed3bf6ac8685/q 2023-07-19 05:15:15,929 DEBUG [StoreOpener-b270606e2347483ed4a3ed3bf6ac8685-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/quota/b270606e2347483ed4a3ed3bf6ac8685/q 2023-07-19 05:15:15,929 INFO [StoreOpener-b270606e2347483ed4a3ed3bf6ac8685-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b270606e2347483ed4a3ed3bf6ac8685 columnFamilyName q 2023-07-19 05:15:15,930 INFO [StoreOpener-b270606e2347483ed4a3ed3bf6ac8685-1] regionserver.HStore(310): Store=b270606e2347483ed4a3ed3bf6ac8685/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:15,930 INFO [StoreOpener-b270606e2347483ed4a3ed3bf6ac8685-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region b270606e2347483ed4a3ed3bf6ac8685 2023-07-19 05:15:15,931 DEBUG [StoreOpener-b270606e2347483ed4a3ed3bf6ac8685-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/quota/b270606e2347483ed4a3ed3bf6ac8685/u 2023-07-19 05:15:15,931 DEBUG [StoreOpener-b270606e2347483ed4a3ed3bf6ac8685-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/quota/b270606e2347483ed4a3ed3bf6ac8685/u 2023-07-19 05:15:15,931 INFO [StoreOpener-b270606e2347483ed4a3ed3bf6ac8685-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b270606e2347483ed4a3ed3bf6ac8685 columnFamilyName u 2023-07-19 05:15:15,932 INFO [StoreOpener-b270606e2347483ed4a3ed3bf6ac8685-1] regionserver.HStore(310): Store=b270606e2347483ed4a3ed3bf6ac8685/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:15,932 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/quota/b270606e2347483ed4a3ed3bf6ac8685 2023-07-19 05:15:15,933 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/quota/b270606e2347483ed4a3ed3bf6ac8685 2023-07-19 05:15:15,934 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
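The CompactionConfiguration and FlushLargeStoresPolicy entries above are the quota region's stores reporting their effective settings: 128 MB minCompactSize, 3 to 10 files per compaction, ratio 1.2, and a 64 MB per-family flush lower bound used as a fallback because hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the table descriptor. As a hedged illustration of where those numbers come from, the snippet below sets the standard configuration keys that correspond to the logged values; overriding them like this is not something the test does, and the key names (other than the flush lower bound, which the log states explicitly) are drawn from general HBase configuration rather than from this log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionKnobsSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Keys behind the values printed by CompactionConfiguration above.
        conf.setInt("hbase.hstore.compaction.min", 3);        // minFilesToCompact:3
        conf.setInt("hbase.hstore.compaction.max", 10);       // maxFilesToCompact:10
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f); // ratio 1.200000
        // Per-column-family flush threshold the FlushLargeStoresPolicy line reports as unset.
        conf.setLong("hbase.hregion.percolumnfamilyflush.size.lower.bound", 64L * 1024 * 1024);
        System.out.println("compaction.min = " + conf.getInt("hbase.hstore.compaction.min", -1));
      }
    }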
2023-07-19 05:15:15,935 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b270606e2347483ed4a3ed3bf6ac8685 2023-07-19 05:15:15,938 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/quota/b270606e2347483ed4a3ed3bf6ac8685/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:15,939 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b270606e2347483ed4a3ed3bf6ac8685; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11009747360, jitterRate=0.02536262571811676}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-19 05:15:15,939 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b270606e2347483ed4a3ed3bf6ac8685: 2023-07-19 05:15:15,939 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685., pid=15, masterSystemTime=1689743715919 2023-07-19 05:15:15,942 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685. 2023-07-19 05:15:15,943 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685. 2023-07-19 05:15:15,943 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=b270606e2347483ed4a3ed3bf6ac8685, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:15,944 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689743715943"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743715943"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743715943"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743715943"}]},"ts":"1689743715943"} 2023-07-19 05:15:15,946 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-19 05:15:15,946 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure b270606e2347483ed4a3ed3bf6ac8685, server=jenkins-hbase4.apache.org,38277,1689743713946 in 179 msec 2023-07-19 05:15:15,948 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-19 05:15:15,948 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=b270606e2347483ed4a3ed3bf6ac8685, ASSIGN in 335 msec 2023-07-19 05:15:15,948 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:15:15,949 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743715949"}]},"ts":"1689743715949"} 2023-07-19 05:15:15,950 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-19 05:15:15,952 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:15:15,954 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 396 msec 2023-07-19 05:15:15,965 INFO [jenkins-hbase4:37145] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 05:15:15,967 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=855beb39135ca8ab514e758e736983c6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:15,967 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689743715967"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743715967"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743715967"}]},"ts":"1689743715967"} 2023-07-19 05:15:15,968 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 855beb39135ca8ab514e758e736983c6, server=jenkins-hbase4.apache.org,38277,1689743713946}] 2023-07-19 05:15:16,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-19 05:15:16,123 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6. 
2023-07-19 05:15:16,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 855beb39135ca8ab514e758e736983c6, NAME => 'np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:16,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 855beb39135ca8ab514e758e736983c6 2023-07-19 05:15:16,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:16,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 855beb39135ca8ab514e758e736983c6 2023-07-19 05:15:16,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 855beb39135ca8ab514e758e736983c6 2023-07-19 05:15:16,125 INFO [StoreOpener-855beb39135ca8ab514e758e736983c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 855beb39135ca8ab514e758e736983c6 2023-07-19 05:15:16,127 DEBUG [StoreOpener-855beb39135ca8ab514e758e736983c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/np1/table1/855beb39135ca8ab514e758e736983c6/fam1 2023-07-19 05:15:16,127 DEBUG [StoreOpener-855beb39135ca8ab514e758e736983c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/np1/table1/855beb39135ca8ab514e758e736983c6/fam1 2023-07-19 05:15:16,128 INFO [StoreOpener-855beb39135ca8ab514e758e736983c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 855beb39135ca8ab514e758e736983c6 columnFamilyName fam1 2023-07-19 05:15:16,128 INFO [StoreOpener-855beb39135ca8ab514e758e736983c6-1] regionserver.HStore(310): Store=855beb39135ca8ab514e758e736983c6/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:16,129 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/np1/table1/855beb39135ca8ab514e758e736983c6 2023-07-19 05:15:16,129 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/np1/table1/855beb39135ca8ab514e758e736983c6 2023-07-19 05:15:16,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 855beb39135ca8ab514e758e736983c6 2023-07-19 05:15:16,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/np1/table1/855beb39135ca8ab514e758e736983c6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:16,135 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 855beb39135ca8ab514e758e736983c6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9829695520, jitterRate=-0.08453826606273651}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:16,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 855beb39135ca8ab514e758e736983c6: 2023-07-19 05:15:16,136 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6., pid=18, masterSystemTime=1689743716119 2023-07-19 05:15:16,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6. 2023-07-19 05:15:16,138 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6. 2023-07-19 05:15:16,138 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=855beb39135ca8ab514e758e736983c6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:16,138 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689743716138"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743716138"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743716138"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743716138"}]},"ts":"1689743716138"} 2023-07-19 05:15:16,141 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-19 05:15:16,141 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 855beb39135ca8ab514e758e736983c6, server=jenkins-hbase4.apache.org,38277,1689743713946 in 172 msec 2023-07-19 05:15:16,143 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-19 05:15:16,143 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=855beb39135ca8ab514e758e736983c6, ASSIGN in 327 msec 2023-07-19 05:15:16,144 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:15:16,145 DEBUG [PEWorker-3] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743716145"}]},"ts":"1689743716145"} 2023-07-19 05:15:16,146 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-19 05:15:16,149 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:15:16,150 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 383 msec 2023-07-19 05:15:16,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-19 05:15:16,374 INFO [Listener at localhost/43345] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-19 05:15:16,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:16,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-19 05:15:16,378 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:15:16,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-19 05:15:16,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-19 05:15:16,408 DEBUG [PEWorker-5] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:15:16,410 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37108, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:15:16,415 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. 
This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=38 msec 2023-07-19 05:15:16,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-19 05:15:16,483 INFO [Listener at localhost/43345] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-19 05:15:16,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:16,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:16,486 INFO [Listener at localhost/43345] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-19 05:15:16,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-19 05:15:16,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-19 05:15:16,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-19 05:15:16,489 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743716489"}]},"ts":"1689743716489"} 2023-07-19 05:15:16,491 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-19 05:15:16,492 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-19 05:15:16,493 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=855beb39135ca8ab514e758e736983c6, UNASSIGN}] 2023-07-19 05:15:16,494 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=855beb39135ca8ab514e758e736983c6, UNASSIGN 2023-07-19 05:15:16,494 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=855beb39135ca8ab514e758e736983c6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:16,494 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689743716494"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743716494"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743716494"}]},"ts":"1689743716494"} 2023-07-19 05:15:16,496 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 855beb39135ca8ab514e758e736983c6, server=jenkins-hbase4.apache.org,38277,1689743713946}] 2023-07-19 05:15:16,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-19 05:15:16,659 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 855beb39135ca8ab514e758e736983c6 2023-07-19 05:15:16,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 855beb39135ca8ab514e758e736983c6, disabling compactions & flushes 2023-07-19 05:15:16,661 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6. 2023-07-19 05:15:16,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6. 2023-07-19 05:15:16,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6. after waiting 0 ms 2023-07-19 05:15:16,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6. 2023-07-19 05:15:16,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/np1/table1/855beb39135ca8ab514e758e736983c6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:15:16,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6. 
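For context on the QuotaExceededException logged just before this disable of np1:table1: the create of np1:table2 was rolled back because the np1 namespace is capped at 5 regions and np1:table1 already occupies one. A region cap of that kind is normally declared on the namespace descriptor when the namespace is created; the sketch below is an assumed, minimal illustration of such a setup (class name and connection handling invented here), not the test's own fixture code.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class QuotedNamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Cap the namespace at 5 regions; a table create that would push the
          // region count past 5 then fails with QuotaExceededException, as in the log.
          NamespaceDescriptor np1 = NamespaceDescriptor.create("np1")
              .addConfiguration("hbase.namespace.quota.maxregions", "5")
              .build();
          admin.createNamespace(np1);
        }
      }
    }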
2023-07-19 05:15:16,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 855beb39135ca8ab514e758e736983c6: 2023-07-19 05:15:16,666 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 855beb39135ca8ab514e758e736983c6 2023-07-19 05:15:16,667 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=855beb39135ca8ab514e758e736983c6, regionState=CLOSED 2023-07-19 05:15:16,667 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689743716667"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743716667"}]},"ts":"1689743716667"} 2023-07-19 05:15:16,669 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-19 05:15:16,669 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 855beb39135ca8ab514e758e736983c6, server=jenkins-hbase4.apache.org,38277,1689743713946 in 172 msec 2023-07-19 05:15:16,671 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-19 05:15:16,671 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=855beb39135ca8ab514e758e736983c6, UNASSIGN in 176 msec 2023-07-19 05:15:16,671 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743716671"}]},"ts":"1689743716671"} 2023-07-19 05:15:16,672 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-19 05:15:16,674 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-19 05:15:16,675 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 188 msec 2023-07-19 05:15:16,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-19 05:15:16,792 INFO [Listener at localhost/43345] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-19 05:15:16,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-19 05:15:16,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-19 05:15:16,795 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-19 05:15:16,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-19 05:15:16,796 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-19 05:15:16,797 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:16,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 05:15:16,799 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/np1/table1/855beb39135ca8ab514e758e736983c6 2023-07-19 05:15:16,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-19 05:15:16,801 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/np1/table1/855beb39135ca8ab514e758e736983c6/fam1, FileablePath, hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/np1/table1/855beb39135ca8ab514e758e736983c6/recovered.edits] 2023-07-19 05:15:16,806 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/np1/table1/855beb39135ca8ab514e758e736983c6/recovered.edits/4.seqid to hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/archive/data/np1/table1/855beb39135ca8ab514e758e736983c6/recovered.edits/4.seqid 2023-07-19 05:15:16,807 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/.tmp/data/np1/table1/855beb39135ca8ab514e758e736983c6 2023-07-19 05:15:16,807 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-19 05:15:16,809 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-19 05:15:16,811 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-19 05:15:16,812 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-19 05:15:16,814 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-19 05:15:16,814 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-19 05:15:16,814 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743716814"}]},"ts":"9223372036854775807"} 2023-07-19 05:15:16,815 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-19 05:15:16,815 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 855beb39135ca8ab514e758e736983c6, NAME => 'np1:table1,,1689743715766.855beb39135ca8ab514e758e736983c6.', STARTKEY => '', ENDKEY => ''}] 2023-07-19 05:15:16,815 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
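The pid=20 DisableTableProcedure and pid=23 DeleteTableProcedure above correspond to the usual disable-then-delete sequence issued from the client, followed a little further down by the pid=24 DeleteNamespaceProcedure for np1. A minimal Admin sketch of that sequence, with assumed class and variable names:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropNp1Table1Sketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("np1", "table1");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (admin.isTableEnabled(tn)) {
            admin.disableTable(tn);     // DisableTableProcedure: regions unassigned, state=DISABLED
          }
          admin.deleteTable(tn);        // DeleteTableProcedure: region files archived, meta rows removed
          admin.deleteNamespace("np1"); // DeleteNamespaceProcedure, as in pid=24 below
        }
      }
    }

The ordering matters: deleteTable rejects a table that is still enabled, and deleteNamespace rejects a namespace that still contains tables, which is why the log shows disable, then delete table, then delete namespace.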
2023-07-19 05:15:16,815 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689743716815"}]},"ts":"9223372036854775807"} 2023-07-19 05:15:16,817 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-19 05:15:16,821 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-19 05:15:16,822 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 29 msec 2023-07-19 05:15:16,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-19 05:15:16,903 INFO [Listener at localhost/43345] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-19 05:15:16,905 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-19 05:15:16,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-19 05:15:16,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-19 05:15:16,917 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-19 05:15:16,920 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-19 05:15:16,927 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-19 05:15:16,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-19 05:15:16,930 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-19 05:15:16,930 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 05:15:16,931 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-19 05:15:16,935 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-19 05:15:16,936 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 28 msec 2023-07-19 05:15:17,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37145] master.MasterRpcServices(1230): Checking to see if procedure is done 
pid=24 2023-07-19 05:15:17,030 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-19 05:15:17,031 INFO [Listener at localhost/43345] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-19 05:15:17,031 DEBUG [Listener at localhost/43345] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7c5dfa12 to 127.0.0.1:58776 2023-07-19 05:15:17,031 DEBUG [Listener at localhost/43345] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:17,031 DEBUG [Listener at localhost/43345] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-19 05:15:17,031 DEBUG [Listener at localhost/43345] util.JVMClusterUtil(257): Found active master hash=1281494785, stopped=false 2023-07-19 05:15:17,031 DEBUG [Listener at localhost/43345] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-19 05:15:17,031 DEBUG [Listener at localhost/43345] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-19 05:15:17,031 DEBUG [Listener at localhost/43345] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-19 05:15:17,032 INFO [Listener at localhost/43345] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,37145,1689743713754 2023-07-19 05:15:17,033 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:17,033 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:17,033 INFO [Listener at localhost/43345] procedure2.ProcedureExecutor(629): Stopping 2023-07-19 05:15:17,033 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:17,033 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:17,033 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:17,034 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:17,035 DEBUG [Listener at localhost/43345] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7a7ac115 to 127.0.0.1:58776 2023-07-19 05:15:17,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:17,035 DEBUG [Listener at localhost/43345] ipc.AbstractRpcClient(494): Stopping rpc client 
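Everything from the "Shutting down minicluster" entry onward is HBaseTestingUtility tearing the test cluster down: the client connection is closed, each region server receives a STOPPING request, regions are flushed and closed, and the region server ephemeral znodes are removed. In a JUnit test built on this utility the teardown is typically a single call in an @AfterClass hook; the sketch below is an assumed minimal form, with the utility instance presumed to be the same one that started the cluster.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;

    public class TearDownSketch {
      // Assumed to be the same utility instance that started the minicluster.
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @AfterClass
      public static void tearDownAfterClass() throws Exception {
        // Stops the HBase cluster, then DFS and ZooKeeper, matching the
        // shutdown sequence recorded in the log.
        TEST_UTIL.shutdownMiniCluster();
      }
    }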
2023-07-19 05:15:17,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:17,035 INFO [Listener at localhost/43345] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38277,1689743713946' ***** 2023-07-19 05:15:17,035 INFO [Listener at localhost/43345] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 05:15:17,036 INFO [Listener at localhost/43345] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37135,1689743714109' ***** 2023-07-19 05:15:17,036 INFO [Listener at localhost/43345] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 05:15:17,036 INFO [Listener at localhost/43345] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45187,1689743714265' ***** 2023-07-19 05:15:17,036 INFO [Listener at localhost/43345] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 05:15:17,036 INFO [RS:2;jenkins-hbase4:45187] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 05:15:17,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:17,036 INFO [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 05:15:17,036 INFO [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 05:15:17,048 INFO [RS:1;jenkins-hbase4:37135] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4c4e4298{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:17,048 INFO [RS:2;jenkins-hbase4:45187] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@64b278aa{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:17,048 INFO [RS:0;jenkins-hbase4:38277] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@27061e18{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:17,049 INFO [RS:1;jenkins-hbase4:37135] server.AbstractConnector(383): Stopped ServerConnector@2f107ff7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:17,049 INFO [RS:0;jenkins-hbase4:38277] server.AbstractConnector(383): Stopped ServerConnector@619d9d61{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:17,049 INFO [RS:2;jenkins-hbase4:45187] server.AbstractConnector(383): Stopped ServerConnector@7596ce29{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:17,049 INFO [RS:0;jenkins-hbase4:38277] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 05:15:17,049 INFO [RS:1;jenkins-hbase4:37135] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 05:15:17,049 INFO [RS:2;jenkins-hbase4:45187] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 05:15:17,050 INFO [RS:0;jenkins-hbase4:38277] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@2d8a119b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 05:15:17,052 INFO [RS:2;jenkins-hbase4:45187] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7fb867a7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 05:15:17,052 INFO [RS:1;jenkins-hbase4:37135] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3de75ac7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 05:15:17,052 INFO [RS:2;jenkins-hbase4:45187] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@53ecbb85{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/hadoop.log.dir/,STOPPED} 2023-07-19 05:15:17,052 INFO [RS:0;jenkins-hbase4:38277] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@ea82fff{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/hadoop.log.dir/,STOPPED} 2023-07-19 05:15:17,052 INFO [RS:1;jenkins-hbase4:37135] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@35eb1866{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/hadoop.log.dir/,STOPPED} 2023-07-19 05:15:17,053 INFO [RS:0;jenkins-hbase4:38277] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 05:15:17,053 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 05:15:17,053 INFO [RS:0;jenkins-hbase4:38277] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 05:15:17,053 INFO [RS:1;jenkins-hbase4:37135] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 05:15:17,054 INFO [RS:0;jenkins-hbase4:38277] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 05:15:17,054 INFO [RS:1;jenkins-hbase4:37135] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 05:15:17,054 INFO [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(3305): Received CLOSE for b270606e2347483ed4a3ed3bf6ac8685 2023-07-19 05:15:17,054 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 05:15:17,054 INFO [RS:2;jenkins-hbase4:45187] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 05:15:17,054 INFO [RS:1;jenkins-hbase4:37135] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 05:15:17,055 INFO [RS:2;jenkins-hbase4:45187] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-19 05:15:17,055 INFO [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(3305): Received CLOSE for 0eea252164c18f46cb8ed0e29a81f112 2023-07-19 05:15:17,054 INFO [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:17,055 INFO [RS:2;jenkins-hbase4:45187] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 05:15:17,055 DEBUG [RS:0;jenkins-hbase4:38277] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6054239f to 127.0.0.1:58776 2023-07-19 05:15:17,055 INFO [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(3305): Received CLOSE for f43c0f30126da755511ab46ca11cb56e 2023-07-19 05:15:17,055 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 05:15:17,055 INFO [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:17,055 DEBUG [RS:0;jenkins-hbase4:38277] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:17,055 DEBUG [RS:1;jenkins-hbase4:37135] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x604ab1ea to 127.0.0.1:58776 2023-07-19 05:15:17,055 INFO [RS:2;jenkins-hbase4:45187] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45187,1689743714265 2023-07-19 05:15:17,055 DEBUG [RS:1;jenkins-hbase4:37135] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:17,055 INFO [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-19 05:15:17,056 INFO [RS:1;jenkins-hbase4:37135] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 05:15:17,056 INFO [RS:1;jenkins-hbase4:37135] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 05:15:17,056 INFO [RS:1;jenkins-hbase4:37135] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 05:15:17,056 DEBUG [RS:2;jenkins-hbase4:45187] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x16ba9192 to 127.0.0.1:58776 2023-07-19 05:15:17,056 INFO [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-19 05:15:17,056 DEBUG [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(1478): Online Regions={b270606e2347483ed4a3ed3bf6ac8685=hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685.} 2023-07-19 05:15:17,056 DEBUG [RS:2;jenkins-hbase4:45187] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:17,056 INFO [RS:2;jenkins-hbase4:45187] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45187,1689743714265; all regions closed. 2023-07-19 05:15:17,056 DEBUG [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(1504): Waiting on b270606e2347483ed4a3ed3bf6ac8685 2023-07-19 05:15:17,056 DEBUG [RS:2;jenkins-hbase4:45187] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-19 05:15:17,056 INFO [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-19 05:15:17,056 DEBUG [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(1478): Online Regions={0eea252164c18f46cb8ed0e29a81f112=hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112., 1588230740=hbase:meta,,1.1588230740, f43c0f30126da755511ab46ca11cb56e=hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e.} 2023-07-19 05:15:17,056 DEBUG [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(1504): Waiting on 0eea252164c18f46cb8ed0e29a81f112, 1588230740, f43c0f30126da755511ab46ca11cb56e 2023-07-19 05:15:17,065 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b270606e2347483ed4a3ed3bf6ac8685, disabling compactions & flushes 2023-07-19 05:15:17,065 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 05:15:17,065 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0eea252164c18f46cb8ed0e29a81f112, disabling compactions & flushes 2023-07-19 05:15:17,065 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 05:15:17,065 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685. 2023-07-19 05:15:17,065 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 05:15:17,065 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112. 2023-07-19 05:15:17,066 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 05:15:17,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685. 2023-07-19 05:15:17,066 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 05:15:17,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112. 2023-07-19 05:15:17,066 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-19 05:15:17,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685. after waiting 0 ms 2023-07-19 05:15:17,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685. 2023-07-19 05:15:17,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112. 
after waiting 0 ms 2023-07-19 05:15:17,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112. 2023-07-19 05:15:17,066 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 0eea252164c18f46cb8ed0e29a81f112 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-19 05:15:17,073 DEBUG [RS:2;jenkins-hbase4:45187] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/oldWALs 2023-07-19 05:15:17,073 INFO [RS:2;jenkins-hbase4:45187] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45187%2C1689743714265:(num 1689743714960) 2023-07-19 05:15:17,073 DEBUG [RS:2;jenkins-hbase4:45187] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:17,073 INFO [RS:2;jenkins-hbase4:45187] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:17,074 INFO [RS:2;jenkins-hbase4:45187] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 05:15:17,074 INFO [RS:2;jenkins-hbase4:45187] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 05:15:17,074 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 05:15:17,074 INFO [RS:2;jenkins-hbase4:45187] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 05:15:17,074 INFO [RS:2;jenkins-hbase4:45187] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 05:15:17,074 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/quota/b270606e2347483ed4a3ed3bf6ac8685/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:15:17,076 INFO [RS:2;jenkins-hbase4:45187] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45187 2023-07-19 05:15:17,076 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685. 2023-07-19 05:15:17,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b270606e2347483ed4a3ed3bf6ac8685: 2023-07-19 05:15:17,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689743715555.b270606e2347483ed4a3ed3bf6ac8685. 
2023-07-19 05:15:17,079 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45187,1689743714265 2023-07-19 05:15:17,079 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45187,1689743714265 2023-07-19 05:15:17,080 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45187,1689743714265 2023-07-19 05:15:17,079 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:17,079 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:17,080 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:17,080 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:17,084 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45187,1689743714265] 2023-07-19 05:15:17,084 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45187,1689743714265; numProcessing=1 2023-07-19 05:15:17,085 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45187,1689743714265 already deleted, retry=false 2023-07-19 05:15:17,085 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45187,1689743714265 expired; onlineServers=2 2023-07-19 05:15:17,094 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/.tmp/info/a6794692a6c640299862f09db4ec7a89 2023-07-19 05:15:17,096 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/namespace/0eea252164c18f46cb8ed0e29a81f112/.tmp/info/c734a1dea252426683ecceb73592f120 2023-07-19 05:15:17,103 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 
a6794692a6c640299862f09db4ec7a89 2023-07-19 05:15:17,104 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c734a1dea252426683ecceb73592f120 2023-07-19 05:15:17,105 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/namespace/0eea252164c18f46cb8ed0e29a81f112/.tmp/info/c734a1dea252426683ecceb73592f120 as hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/namespace/0eea252164c18f46cb8ed0e29a81f112/info/c734a1dea252426683ecceb73592f120 2023-07-19 05:15:17,112 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c734a1dea252426683ecceb73592f120 2023-07-19 05:15:17,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/namespace/0eea252164c18f46cb8ed0e29a81f112/info/c734a1dea252426683ecceb73592f120, entries=3, sequenceid=8, filesize=5.0 K 2023-07-19 05:15:17,114 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/.tmp/rep_barrier/d00047c06f8d4d438ff52affd1102273 2023-07-19 05:15:17,115 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 0eea252164c18f46cb8ed0e29a81f112 in 49ms, sequenceid=8, compaction requested=false 2023-07-19 05:15:17,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-19 05:15:17,121 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/namespace/0eea252164c18f46cb8ed0e29a81f112/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-19 05:15:17,121 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d00047c06f8d4d438ff52affd1102273 2023-07-19 05:15:17,122 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112. 2023-07-19 05:15:17,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0eea252164c18f46cb8ed0e29a81f112: 2023-07-19 05:15:17,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689743715108.0eea252164c18f46cb8ed0e29a81f112. 2023-07-19 05:15:17,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f43c0f30126da755511ab46ca11cb56e, disabling compactions & flushes 2023-07-19 05:15:17,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e. 
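[Editor's note] The commit step above moves the freshly flushed HFile (c734a1dea252426683ecceb73592f120) out of the region's .tmp directory and into the column family directory before HStore picks it up. One way to see the result from a test is to list the family directory with the plain Hadoop FileSystem API; a sketch, with the rootdir, region hash, and family name copied from the log entries above and everything else illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListNamespaceStoreFiles {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Table layout is data/<namespace>/<table>/<region>/<family>; the path
    // below is the hbase:namespace 'info' family directory from the log.
    Path family = new Path("hdfs://localhost:39859/user/jenkins/test-data/"
        + "f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/namespace/"
        + "0eea252164c18f46cb8ed0e29a81f112/info");
    FileSystem fs = family.getFileSystem(conf);
    for (FileStatus hfile : fs.listStatus(family)) {
      // After the commit above, the flushed HFile shows up here.
      System.out.println(hfile.getPath().getName() + " " + hfile.getLen() + " bytes");
    }
  }
}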
2023-07-19 05:15:17,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e. 2023-07-19 05:15:17,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e. after waiting 0 ms 2023-07-19 05:15:17,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e. 2023-07-19 05:15:17,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f43c0f30126da755511ab46ca11cb56e 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-19 05:15:17,128 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:17,129 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:17,132 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:17,139 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/.tmp/table/444a7744fad340e2bf26bafc317d9fed 2023-07-19 05:15:17,148 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 444a7744fad340e2bf26bafc317d9fed 2023-07-19 05:15:17,149 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/.tmp/info/a6794692a6c640299862f09db4ec7a89 as hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/info/a6794692a6c640299862f09db4ec7a89 2023-07-19 05:15:17,150 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/rsgroup/f43c0f30126da755511ab46ca11cb56e/.tmp/m/06ea3bc885ea4008ac2ff0b886bb7902 2023-07-19 05:15:17,156 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a6794692a6c640299862f09db4ec7a89 2023-07-19 05:15:17,156 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/info/a6794692a6c640299862f09db4ec7a89, entries=32, sequenceid=31, filesize=8.5 K 2023-07-19 05:15:17,157 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/rsgroup/f43c0f30126da755511ab46ca11cb56e/.tmp/m/06ea3bc885ea4008ac2ff0b886bb7902 as hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/rsgroup/f43c0f30126da755511ab46ca11cb56e/m/06ea3bc885ea4008ac2ff0b886bb7902 2023-07-19 05:15:17,157 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/.tmp/rep_barrier/d00047c06f8d4d438ff52affd1102273 as hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/rep_barrier/d00047c06f8d4d438ff52affd1102273 2023-07-19 05:15:17,163 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d00047c06f8d4d438ff52affd1102273 2023-07-19 05:15:17,164 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/rsgroup/f43c0f30126da755511ab46ca11cb56e/m/06ea3bc885ea4008ac2ff0b886bb7902, entries=1, sequenceid=7, filesize=4.9 K 2023-07-19 05:15:17,164 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/rep_barrier/d00047c06f8d4d438ff52affd1102273, entries=1, sequenceid=31, filesize=4.9 K 2023-07-19 05:15:17,164 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/.tmp/table/444a7744fad340e2bf26bafc317d9fed as hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/table/444a7744fad340e2bf26bafc317d9fed 2023-07-19 05:15:17,165 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for f43c0f30126da755511ab46ca11cb56e in 41ms, sequenceid=7, compaction requested=false 2023-07-19 05:15:17,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-19 05:15:17,172 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/rsgroup/f43c0f30126da755511ab46ca11cb56e/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-19 05:15:17,172 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 444a7744fad340e2bf26bafc317d9fed 2023-07-19 05:15:17,173 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/table/444a7744fad340e2bf26bafc317d9fed, entries=8, sequenceid=31, filesize=5.2 K 2023-07-19 05:15:17,173 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 05:15:17,173 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e. 
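[Editor's note] The hbase:rsgroup region's single 'm' family holds the serialized group definitions that TestRSGroupsAdmin1 manipulates; the 585 B flushed into 06ea3bc885ea4008ac2ff0b886bb7902 above is that metadata. It can be read back with the ordinary client scan API; a sketch assuming a running cluster (decoding of the protobuf cell values is omitted):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanRSGroupTable {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("hbase:rsgroup"));
         ResultScanner scanner = table.getScanner(new Scan().addFamily(Bytes.toBytes("m")))) {
      for (Result row : scanner) {
        // Each row key is a group name; the 'm' family stores the serialized
        // group membership that the flush above persisted.
        System.out.println(Bytes.toString(row.getRow()));
      }
    }
  }
}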
2023-07-19 05:15:17,173 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f43c0f30126da755511ab46ca11cb56e: 2023-07-19 05:15:17,173 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689743715247.f43c0f30126da755511ab46ca11cb56e. 2023-07-19 05:15:17,174 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 108ms, sequenceid=31, compaction requested=false 2023-07-19 05:15:17,174 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-19 05:15:17,185 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-19 05:15:17,185 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 05:15:17,186 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 05:15:17,186 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 05:15:17,186 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-19 05:15:17,234 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:17,234 INFO [RS:2;jenkins-hbase4:45187] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45187,1689743714265; zookeeper connection closed. 2023-07-19 05:15:17,234 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:45187-0x1017c0168c00003, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:17,236 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6edf1bf6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6edf1bf6 2023-07-19 05:15:17,256 INFO [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38277,1689743713946; all regions closed. 2023-07-19 05:15:17,256 DEBUG [RS:0;jenkins-hbase4:38277] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-19 05:15:17,256 INFO [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37135,1689743714109; all regions closed. 2023-07-19 05:15:17,257 DEBUG [RS:1;jenkins-hbase4:37135] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
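[Editor's note] The meta flush above writes all three of its column families: info (region locations and state, 32 entries here), rep_barrier, and table. Client code normally reads the info-family content back through RegionLocator rather than scanning hbase:meta directly; a small sketch, using hbase:namespace as an example table and otherwise assuming a running cluster:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ShowRegionLocations {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
      // Each location pairs a region (as recorded in meta's info family)
      // with the server currently hosting it.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getRegionNameAsString()
            + " on " + loc.getServerName());
      }
    }
  }
}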
2023-07-19 05:15:17,267 DEBUG [RS:0;jenkins-hbase4:38277] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/oldWALs 2023-07-19 05:15:17,267 INFO [RS:0;jenkins-hbase4:38277] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38277%2C1689743713946:(num 1689743714960) 2023-07-19 05:15:17,267 DEBUG [RS:0;jenkins-hbase4:38277] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:17,267 INFO [RS:0;jenkins-hbase4:38277] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:17,267 INFO [RS:0;jenkins-hbase4:38277] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 05:15:17,267 INFO [RS:0;jenkins-hbase4:38277] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 05:15:17,267 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 05:15:17,267 INFO [RS:0;jenkins-hbase4:38277] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 05:15:17,267 INFO [RS:0;jenkins-hbase4:38277] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 05:15:17,268 INFO [RS:0;jenkins-hbase4:38277] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38277 2023-07-19 05:15:17,270 DEBUG [RS:1;jenkins-hbase4:37135] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/oldWALs 2023-07-19 05:15:17,270 INFO [RS:1;jenkins-hbase4:37135] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37135%2C1689743714109.meta:.meta(num 1689743715054) 2023-07-19 05:15:17,271 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:17,271 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38277,1689743713946 2023-07-19 05:15:17,271 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:17,275 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38277,1689743713946] 2023-07-19 05:15:17,275 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38277,1689743713946; numProcessing=2 2023-07-19 05:15:17,277 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38277,1689743713946 already deleted, retry=false 2023-07-19 05:15:17,277 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38277,1689743713946 expired; onlineServers=1 2023-07-19 05:15:17,281 DEBUG [RS:1;jenkins-hbase4:37135] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to 
/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/oldWALs 2023-07-19 05:15:17,281 INFO [RS:1;jenkins-hbase4:37135] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37135%2C1689743714109:(num 1689743714961) 2023-07-19 05:15:17,281 DEBUG [RS:1;jenkins-hbase4:37135] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:17,281 INFO [RS:1;jenkins-hbase4:37135] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:17,281 INFO [RS:1;jenkins-hbase4:37135] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 05:15:17,281 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 05:15:17,282 INFO [RS:1;jenkins-hbase4:37135] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37135 2023-07-19 05:15:17,286 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37135,1689743714109 2023-07-19 05:15:17,286 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:17,287 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37135,1689743714109] 2023-07-19 05:15:17,287 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37135,1689743714109; numProcessing=3 2023-07-19 05:15:17,290 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37135,1689743714109 already deleted, retry=false 2023-07-19 05:15:17,290 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37135,1689743714109 expired; onlineServers=0 2023-07-19 05:15:17,290 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37145,1689743713754' ***** 2023-07-19 05:15:17,290 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-19 05:15:17,291 DEBUG [M:0;jenkins-hbase4:37145] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@485d9255, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 05:15:17,291 INFO [M:0;jenkins-hbase4:37145] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 05:15:17,292 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-19 05:15:17,292 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-19 05:15:17,293 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 05:15:17,293 INFO [M:0;jenkins-hbase4:37145] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@38a9d218{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 05:15:17,293 INFO [M:0;jenkins-hbase4:37145] server.AbstractConnector(383): Stopped ServerConnector@78e67007{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:17,293 INFO [M:0;jenkins-hbase4:37145] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 05:15:17,293 INFO [M:0;jenkins-hbase4:37145] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5f546421{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 05:15:17,294 INFO [M:0;jenkins-hbase4:37145] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6e960f4c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/hadoop.log.dir/,STOPPED} 2023-07-19 05:15:17,294 INFO [M:0;jenkins-hbase4:37145] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37145,1689743713754 2023-07-19 05:15:17,294 INFO [M:0;jenkins-hbase4:37145] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37145,1689743713754; all regions closed. 2023-07-19 05:15:17,294 DEBUG [M:0;jenkins-hbase4:37145] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:17,294 INFO [M:0;jenkins-hbase4:37145] master.HMaster(1491): Stopping master jetty server 2023-07-19 05:15:17,295 INFO [M:0;jenkins-hbase4:37145] server.AbstractConnector(383): Stopped ServerConnector@7c7e7546{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:17,295 DEBUG [M:0;jenkins-hbase4:37145] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-19 05:15:17,295 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-19 05:15:17,295 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689743714648] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689743714648,5,FailOnTimeoutGroup] 2023-07-19 05:15:17,295 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689743714648] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689743714648,5,FailOnTimeoutGroup] 2023-07-19 05:15:17,295 DEBUG [M:0;jenkins-hbase4:37145] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-19 05:15:17,297 INFO [M:0;jenkins-hbase4:37145] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-19 05:15:17,297 INFO [M:0;jenkins-hbase4:37145] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-19 05:15:17,297 INFO [M:0;jenkins-hbase4:37145] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 05:15:17,297 DEBUG [M:0;jenkins-hbase4:37145] master.HMaster(1512): Stopping service threads 2023-07-19 05:15:17,297 INFO [M:0;jenkins-hbase4:37145] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-19 05:15:17,298 ERROR [M:0;jenkins-hbase4:37145] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-19 05:15:17,298 INFO [M:0;jenkins-hbase4:37145] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-19 05:15:17,298 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-19 05:15:17,299 DEBUG [M:0;jenkins-hbase4:37145] zookeeper.ZKUtil(398): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-19 05:15:17,299 WARN [M:0;jenkins-hbase4:37145] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-19 05:15:17,299 INFO [M:0;jenkins-hbase4:37145] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-19 05:15:17,299 INFO [M:0;jenkins-hbase4:37145] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-19 05:15:17,299 DEBUG [M:0;jenkins-hbase4:37145] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 05:15:17,299 INFO [M:0;jenkins-hbase4:37145] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:15:17,300 DEBUG [M:0;jenkins-hbase4:37145] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:15:17,300 DEBUG [M:0;jenkins-hbase4:37145] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 05:15:17,300 DEBUG [M:0;jenkins-hbase4:37145] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-19 05:15:17,300 INFO [M:0;jenkins-hbase4:37145] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.95 KB heapSize=109.11 KB 2023-07-19 05:15:17,313 INFO [M:0;jenkins-hbase4:37145] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.95 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ced1f989df26446eafb2213df18cb2d3 2023-07-19 05:15:17,318 DEBUG [M:0;jenkins-hbase4:37145] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ced1f989df26446eafb2213df18cb2d3 as hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ced1f989df26446eafb2213df18cb2d3 2023-07-19 05:15:17,323 INFO [M:0;jenkins-hbase4:37145] regionserver.HStore(1080): Added hdfs://localhost:39859/user/jenkins/test-data/f700d5b1-dd16-c927-b07f-be4667f3482a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ced1f989df26446eafb2213df18cb2d3, entries=24, sequenceid=194, filesize=12.4 K 2023-07-19 05:15:17,324 INFO [M:0;jenkins-hbase4:37145] regionserver.HRegion(2948): Finished flush of dataSize ~92.95 KB/95185, heapSize ~109.09 KB/111712, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=194, compaction requested=false 2023-07-19 05:15:17,328 INFO [M:0;jenkins-hbase4:37145] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:15:17,328 DEBUG [M:0;jenkins-hbase4:37145] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 05:15:17,332 INFO [M:0;jenkins-hbase4:37145] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-19 05:15:17,332 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 05:15:17,332 INFO [M:0;jenkins-hbase4:37145] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37145 2023-07-19 05:15:17,334 DEBUG [M:0;jenkins-hbase4:37145] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,37145,1689743713754 already deleted, retry=false 2023-07-19 05:15:17,737 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:17,737 INFO [M:0;jenkins-hbase4:37145] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37145,1689743713754; zookeeper connection closed. 2023-07-19 05:15:17,737 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): master:37145-0x1017c0168c00000, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:17,837 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:17,837 INFO [RS:1;jenkins-hbase4:37135] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37135,1689743714109; zookeeper connection closed. 
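[Editor's note] The ZooKeeper NodeDeleted/NodeChildrenChanged events and the RegionServerTracker expirations in the entries above are how the master learns that each region server has gone away: every live server keeps an ephemeral child under /hbase/rs, and the child vanishes when the server's session closes. A stand-alone watcher against the same quorum (address copied from the log, everything else illustrative) shows the mechanism with the plain ZooKeeper client; this is only a stand-in for what ZKWatcher/RegionServerTracker do inside the master:

import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

public class WatchRegionServers {
  public static void main(String[] args) throws Exception {
    // Quorum and base znode taken from the log above.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:58776", 30000,
        (WatchedEvent event) -> System.out.println("event: " + event));
    // Each live region server keeps an ephemeral child under /hbase/rs; when
    // its session closes, the NodeDeleted/NodeChildrenChanged events seen in
    // the log fire and the child disappears from this listing.
    List<String> servers = zk.getChildren("/hbase/rs", true);
    System.out.println("online servers: " + servers);
    zk.close();
  }
}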
2023-07-19 05:15:17,837 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:37135-0x1017c0168c00002, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:17,837 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6cd99fb5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6cd99fb5 2023-07-19 05:15:17,938 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:17,938 INFO [RS:0;jenkins-hbase4:38277] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38277,1689743713946; zookeeper connection closed. 2023-07-19 05:15:17,938 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): regionserver:38277-0x1017c0168c00001, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:17,938 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1fd4e798] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1fd4e798 2023-07-19 05:15:17,938 INFO [Listener at localhost/43345] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-19 05:15:17,939 WARN [Listener at localhost/43345] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 05:15:17,942 INFO [Listener at localhost/43345] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 05:15:18,048 WARN [BP-162100135-172.31.14.131-1689743712768 heartbeating to localhost/127.0.0.1:39859] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 05:15:18,048 WARN [BP-162100135-172.31.14.131-1689743712768 heartbeating to localhost/127.0.0.1:39859] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-162100135-172.31.14.131-1689743712768 (Datanode Uuid 8f23f68a-4b69-4c65-bdd5-e5fbc1398a93) service to localhost/127.0.0.1:39859 2023-07-19 05:15:18,049 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/cluster_a40a042b-8387-eb97-eb03-b118443e501d/dfs/data/data5/current/BP-162100135-172.31.14.131-1689743712768] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:18,049 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/cluster_a40a042b-8387-eb97-eb03-b118443e501d/dfs/data/data6/current/BP-162100135-172.31.14.131-1689743712768] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:18,052 WARN [Listener at localhost/43345] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 05:15:18,055 INFO [Listener at localhost/43345] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 05:15:18,162 WARN [BP-162100135-172.31.14.131-1689743712768 heartbeating to localhost/127.0.0.1:39859] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager 
interrupted 2023-07-19 05:15:18,162 WARN [BP-162100135-172.31.14.131-1689743712768 heartbeating to localhost/127.0.0.1:39859] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-162100135-172.31.14.131-1689743712768 (Datanode Uuid 899c752e-603d-49ad-9bd2-3f0601e92e32) service to localhost/127.0.0.1:39859 2023-07-19 05:15:18,163 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/cluster_a40a042b-8387-eb97-eb03-b118443e501d/dfs/data/data3/current/BP-162100135-172.31.14.131-1689743712768] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:18,164 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/cluster_a40a042b-8387-eb97-eb03-b118443e501d/dfs/data/data4/current/BP-162100135-172.31.14.131-1689743712768] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:18,165 WARN [Listener at localhost/43345] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 05:15:18,168 INFO [Listener at localhost/43345] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 05:15:18,272 WARN [BP-162100135-172.31.14.131-1689743712768 heartbeating to localhost/127.0.0.1:39859] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 05:15:18,272 WARN [BP-162100135-172.31.14.131-1689743712768 heartbeating to localhost/127.0.0.1:39859] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-162100135-172.31.14.131-1689743712768 (Datanode Uuid a0ddb556-74cf-41dd-bf72-3efb3b220d99) service to localhost/127.0.0.1:39859 2023-07-19 05:15:18,273 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/cluster_a40a042b-8387-eb97-eb03-b118443e501d/dfs/data/data1/current/BP-162100135-172.31.14.131-1689743712768] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:18,273 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/cluster_a40a042b-8387-eb97-eb03-b118443e501d/dfs/data/data2/current/BP-162100135-172.31.14.131-1689743712768] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:18,285 INFO [Listener at localhost/43345] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 05:15:18,401 INFO [Listener at localhost/43345] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-19 05:15:18,441 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-19 05:15:18,441 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-19 05:15:18,441 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(445): 
System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/hadoop.log.dir so I do NOT create it in target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5 2023-07-19 05:15:18,441 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29889217-1cb1-ea8b-5ca7-d011d0feb9b8/hadoop.tmp.dir so I do NOT create it in target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5 2023-07-19 05:15:18,441 INFO [Listener at localhost/43345] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e, deleteOnExit=true 2023-07-19 05:15:18,441 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-19 05:15:18,441 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/test.cache.data in system properties and HBase conf 2023-07-19 05:15:18,441 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/hadoop.tmp.dir in system properties and HBase conf 2023-07-19 05:15:18,441 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/hadoop.log.dir in system properties and HBase conf 2023-07-19 05:15:18,441 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-19 05:15:18,442 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-19 05:15:18,442 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-19 05:15:18,442 DEBUG [Listener at localhost/43345] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-19 05:15:18,442 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-19 05:15:18,442 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-19 05:15:18,442 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-19 05:15:18,442 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 05:15:18,442 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-19 05:15:18,443 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-19 05:15:18,443 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 05:15:18,443 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 05:15:18,443 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-19 05:15:18,443 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/nfs.dump.dir in system properties and HBase conf 2023-07-19 05:15:18,443 INFO [Listener at localhost/43345] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/java.io.tmpdir in system properties and HBase conf 2023-07-19 05:15:18,443 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 05:15:18,443 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-19 05:15:18,443 INFO [Listener at localhost/43345] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-19 05:15:18,447 WARN [Listener at localhost/43345] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 05:15:18,447 WARN [Listener at localhost/43345] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 05:15:18,490 WARN [Listener at localhost/43345] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 05:15:18,492 INFO [Listener at localhost/43345] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 05:15:18,497 INFO [Listener at localhost/43345] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/java.io.tmpdir/Jetty_localhost_40231_hdfs____gi8rlq/webapp 2023-07-19 05:15:18,499 DEBUG [Listener at localhost/43345-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1017c0168c0000a, quorum=127.0.0.1:58776, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-19 05:15:18,499 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1017c0168c0000a, quorum=127.0.0.1:58776, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-19 05:15:18,589 INFO [Listener at localhost/43345] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40231 2023-07-19 05:15:18,593 WARN [Listener at localhost/43345] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 05:15:18,594 WARN [Listener at localhost/43345] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 05:15:18,649 WARN [Listener at localhost/44175] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 05:15:18,662 WARN [Listener at localhost/44175] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 05:15:18,665 WARN [Listener 
at localhost/44175] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 05:15:18,666 INFO [Listener at localhost/44175] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 05:15:18,671 INFO [Listener at localhost/44175] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/java.io.tmpdir/Jetty_localhost_40815_datanode____.x49rqx/webapp 2023-07-19 05:15:18,764 INFO [Listener at localhost/44175] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40815 2023-07-19 05:15:18,772 WARN [Listener at localhost/33953] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 05:15:18,787 WARN [Listener at localhost/33953] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 05:15:18,790 WARN [Listener at localhost/33953] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 05:15:18,791 INFO [Listener at localhost/33953] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 05:15:18,796 INFO [Listener at localhost/33953] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/java.io.tmpdir/Jetty_localhost_43567_datanode____sc5yri/webapp 2023-07-19 05:15:18,873 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9f9dea66129e4feb: Processing first storage report for DS-345d74be-872e-4d52-9cb2-51f897ffa631 from datanode f68aab75-1165-4ce2-bc20-c73dccd09c21 2023-07-19 05:15:18,873 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9f9dea66129e4feb: from storage DS-345d74be-872e-4d52-9cb2-51f897ffa631 node DatanodeRegistration(127.0.0.1:41985, datanodeUuid=f68aab75-1165-4ce2-bc20-c73dccd09c21, infoPort=35743, infoSecurePort=0, ipcPort=33953, storageInfo=lv=-57;cid=testClusterID;nsid=1645531088;c=1689743718450), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 05:15:18,873 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9f9dea66129e4feb: Processing first storage report for DS-e36d35c3-e3bd-4a32-a7e4-ce79708b9165 from datanode f68aab75-1165-4ce2-bc20-c73dccd09c21 2023-07-19 05:15:18,873 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9f9dea66129e4feb: from storage DS-e36d35c3-e3bd-4a32-a7e4-ce79708b9165 node DatanodeRegistration(127.0.0.1:41985, datanodeUuid=f68aab75-1165-4ce2-bc20-c73dccd09c21, infoPort=35743, infoSecurePort=0, ipcPort=33953, storageInfo=lv=-57;cid=testClusterID;nsid=1645531088;c=1689743718450), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 05:15:18,900 INFO [Listener at localhost/33953] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43567 2023-07-19 05:15:18,907 WARN [Listener at localhost/46769] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-19 05:15:18,925 WARN [Listener at localhost/46769] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 05:15:18,927 WARN [Listener at localhost/46769] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 05:15:18,928 INFO [Listener at localhost/46769] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 05:15:18,932 INFO [Listener at localhost/46769] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/java.io.tmpdir/Jetty_localhost_45235_datanode____.sk2cva/webapp 2023-07-19 05:15:19,010 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x568cbe4160c250b1: Processing first storage report for DS-f3cc29e5-9316-4bf7-8e06-c992433b44f8 from datanode 06352d53-0036-40f4-8dee-0816b87884b2 2023-07-19 05:15:19,010 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x568cbe4160c250b1: from storage DS-f3cc29e5-9316-4bf7-8e06-c992433b44f8 node DatanodeRegistration(127.0.0.1:35727, datanodeUuid=06352d53-0036-40f4-8dee-0816b87884b2, infoPort=41533, infoSecurePort=0, ipcPort=46769, storageInfo=lv=-57;cid=testClusterID;nsid=1645531088;c=1689743718450), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 05:15:19,010 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x568cbe4160c250b1: Processing first storage report for DS-2fad46de-2ffb-4aa8-ba4b-1c61090d1f4e from datanode 06352d53-0036-40f4-8dee-0816b87884b2 2023-07-19 05:15:19,010 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x568cbe4160c250b1: from storage DS-2fad46de-2ffb-4aa8-ba4b-1c61090d1f4e node DatanodeRegistration(127.0.0.1:35727, datanodeUuid=06352d53-0036-40f4-8dee-0816b87884b2, infoPort=41533, infoSecurePort=0, ipcPort=46769, storageInfo=lv=-57;cid=testClusterID;nsid=1645531088;c=1689743718450), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-19 05:15:19,034 INFO [Listener at localhost/46769] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45235 2023-07-19 05:15:19,042 WARN [Listener at localhost/42441] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 05:15:19,134 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x15affd1a23d9cdf4: Processing first storage report for DS-74be87a0-13f3-435a-adac-f9bd6c929921 from datanode e2dce496-373b-456d-b505-a160fc58264e 2023-07-19 05:15:19,134 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x15affd1a23d9cdf4: from storage DS-74be87a0-13f3-435a-adac-f9bd6c929921 node DatanodeRegistration(127.0.0.1:38475, datanodeUuid=e2dce496-373b-456d-b505-a160fc58264e, infoPort=44307, infoSecurePort=0, ipcPort=42441, storageInfo=lv=-57;cid=testClusterID;nsid=1645531088;c=1689743718450), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 05:15:19,134 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x15affd1a23d9cdf4: Processing first storage 
report for DS-de047c83-352c-4a85-9fc9-ae5fb41fd126 from datanode e2dce496-373b-456d-b505-a160fc58264e 2023-07-19 05:15:19,134 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x15affd1a23d9cdf4: from storage DS-de047c83-352c-4a85-9fc9-ae5fb41fd126 node DatanodeRegistration(127.0.0.1:38475, datanodeUuid=e2dce496-373b-456d-b505-a160fc58264e, infoPort=44307, infoSecurePort=0, ipcPort=42441, storageInfo=lv=-57;cid=testClusterID;nsid=1645531088;c=1689743718450), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 05:15:19,149 DEBUG [Listener at localhost/42441] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5 2023-07-19 05:15:19,151 INFO [Listener at localhost/42441] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/zookeeper_0, clientPort=51693, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-19 05:15:19,152 INFO [Listener at localhost/42441] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51693 2023-07-19 05:15:19,152 INFO [Listener at localhost/42441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:19,153 INFO [Listener at localhost/42441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:19,570 INFO [Listener at localhost/42441] util.FSUtils(471): Created version file at hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f with version=8 2023-07-19 05:15:19,570 INFO [Listener at localhost/42441] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:34189/user/jenkins/test-data/489e192a-3d46-a3c8-c9f8-2c4148fd15fd/hbase-staging 2023-07-19 05:15:19,571 DEBUG [Listener at localhost/42441] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-19 05:15:19,571 DEBUG [Listener at localhost/42441] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-19 05:15:19,571 DEBUG [Listener at localhost/42441] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-19 05:15:19,571 DEBUG [Listener at localhost/42441] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-19 05:15:19,572 INFO [Listener at localhost/42441] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 05:15:19,572 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:19,572 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:19,572 INFO [Listener at localhost/42441] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 05:15:19,572 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:19,572 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 05:15:19,573 INFO [Listener at localhost/42441] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 05:15:19,574 INFO [Listener at localhost/42441] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39261 2023-07-19 05:15:19,575 INFO [Listener at localhost/42441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:19,576 INFO [Listener at localhost/42441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:19,577 INFO [Listener at localhost/42441] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39261 connecting to ZooKeeper ensemble=127.0.0.1:51693 2023-07-19 05:15:19,584 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:392610x0, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 05:15:19,585 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39261-0x1017c017df10000 connected 2023-07-19 05:15:19,601 DEBUG [Listener at localhost/42441] zookeeper.ZKUtil(164): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 05:15:19,601 DEBUG [Listener at localhost/42441] zookeeper.ZKUtil(164): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:19,601 DEBUG [Listener at localhost/42441] zookeeper.ZKUtil(164): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 05:15:19,602 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39261 2023-07-19 05:15:19,603 DEBUG [Listener at localhost/42441] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39261 2023-07-19 05:15:19,603 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39261 2023-07-19 05:15:19,604 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39261 2023-07-19 05:15:19,604 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39261 2023-07-19 05:15:19,606 INFO [Listener at localhost/42441] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 05:15:19,606 INFO [Listener at localhost/42441] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 05:15:19,606 INFO [Listener at localhost/42441] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 05:15:19,607 INFO [Listener at localhost/42441] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-19 05:15:19,607 INFO [Listener at localhost/42441] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 05:15:19,607 INFO [Listener at localhost/42441] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 05:15:19,607 INFO [Listener at localhost/42441] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-19 05:15:19,608 INFO [Listener at localhost/42441] http.HttpServer(1146): Jetty bound to port 44249 2023-07-19 05:15:19,608 INFO [Listener at localhost/42441] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 05:15:19,611 INFO [Listener at localhost/42441] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:19,611 INFO [Listener at localhost/42441] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1194b7d4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/hadoop.log.dir/,AVAILABLE} 2023-07-19 05:15:19,611 INFO [Listener at localhost/42441] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:19,612 INFO [Listener at localhost/42441] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@56385464{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 05:15:19,731 INFO [Listener at localhost/42441] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 05:15:19,733 INFO [Listener at localhost/42441] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 05:15:19,733 INFO [Listener at localhost/42441] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 05:15:19,733 INFO [Listener at localhost/42441] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 05:15:19,734 INFO [Listener at localhost/42441] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:19,735 INFO [Listener at localhost/42441] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@73d7f31e{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/java.io.tmpdir/jetty-0_0_0_0-44249-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7426302509113633313/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 05:15:19,737 INFO [Listener at localhost/42441] server.AbstractConnector(333): Started ServerConnector@147da2bf{HTTP/1.1, (http/1.1)}{0.0.0.0:44249} 2023-07-19 05:15:19,737 INFO [Listener at localhost/42441] server.Server(415): Started @44165ms 2023-07-19 05:15:19,737 INFO [Listener at localhost/42441] master.HMaster(444): hbase.rootdir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f, hbase.cluster.distributed=false 2023-07-19 05:15:19,751 INFO [Listener at localhost/42441] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 05:15:19,751 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:19,751 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:19,751 
INFO [Listener at localhost/42441] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 05:15:19,751 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:19,752 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 05:15:19,752 INFO [Listener at localhost/42441] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 05:15:19,752 INFO [Listener at localhost/42441] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46065 2023-07-19 05:15:19,753 INFO [Listener at localhost/42441] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 05:15:19,753 DEBUG [Listener at localhost/42441] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 05:15:19,754 INFO [Listener at localhost/42441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:19,755 INFO [Listener at localhost/42441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:19,756 INFO [Listener at localhost/42441] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46065 connecting to ZooKeeper ensemble=127.0.0.1:51693 2023-07-19 05:15:19,759 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:460650x0, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 05:15:19,760 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46065-0x1017c017df10001 connected 2023-07-19 05:15:19,760 DEBUG [Listener at localhost/42441] zookeeper.ZKUtil(164): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 05:15:19,761 DEBUG [Listener at localhost/42441] zookeeper.ZKUtil(164): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:19,761 DEBUG [Listener at localhost/42441] zookeeper.ZKUtil(164): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 05:15:19,762 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46065 2023-07-19 05:15:19,762 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46065 2023-07-19 05:15:19,762 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46065 2023-07-19 05:15:19,765 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46065 2023-07-19 05:15:19,765 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46065 2023-07-19 05:15:19,767 INFO [Listener at localhost/42441] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 05:15:19,767 INFO [Listener at localhost/42441] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 05:15:19,767 INFO [Listener at localhost/42441] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 05:15:19,767 INFO [Listener at localhost/42441] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 05:15:19,767 INFO [Listener at localhost/42441] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 05:15:19,768 INFO [Listener at localhost/42441] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 05:15:19,768 INFO [Listener at localhost/42441] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 05:15:19,768 INFO [Listener at localhost/42441] http.HttpServer(1146): Jetty bound to port 40853 2023-07-19 05:15:19,768 INFO [Listener at localhost/42441] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 05:15:19,771 INFO [Listener at localhost/42441] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:19,771 INFO [Listener at localhost/42441] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@606e99a1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/hadoop.log.dir/,AVAILABLE} 2023-07-19 05:15:19,772 INFO [Listener at localhost/42441] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:19,772 INFO [Listener at localhost/42441] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@67712218{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 05:15:19,883 INFO [Listener at localhost/42441] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 05:15:19,884 INFO [Listener at localhost/42441] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 05:15:19,884 INFO [Listener at localhost/42441] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 05:15:19,884 INFO [Listener at localhost/42441] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 05:15:19,885 INFO [Listener at localhost/42441] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:19,886 INFO 
[Listener at localhost/42441] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@22b53ac5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/java.io.tmpdir/jetty-0_0_0_0-40853-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5104976354110259630/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:19,887 INFO [Listener at localhost/42441] server.AbstractConnector(333): Started ServerConnector@1e767489{HTTP/1.1, (http/1.1)}{0.0.0.0:40853} 2023-07-19 05:15:19,887 INFO [Listener at localhost/42441] server.Server(415): Started @44315ms 2023-07-19 05:15:19,899 INFO [Listener at localhost/42441] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 05:15:19,899 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:19,899 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:19,899 INFO [Listener at localhost/42441] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 05:15:19,899 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:19,899 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 05:15:19,899 INFO [Listener at localhost/42441] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 05:15:19,900 INFO [Listener at localhost/42441] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41955 2023-07-19 05:15:19,900 INFO [Listener at localhost/42441] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 05:15:19,904 DEBUG [Listener at localhost/42441] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 05:15:19,904 INFO [Listener at localhost/42441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:19,905 INFO [Listener at localhost/42441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:19,906 INFO [Listener at localhost/42441] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41955 connecting to ZooKeeper ensemble=127.0.0.1:51693 2023-07-19 05:15:19,909 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:419550x0, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 
05:15:19,910 DEBUG [Listener at localhost/42441] zookeeper.ZKUtil(164): regionserver:419550x0, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 05:15:19,911 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41955-0x1017c017df10002 connected 2023-07-19 05:15:19,911 DEBUG [Listener at localhost/42441] zookeeper.ZKUtil(164): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:19,912 DEBUG [Listener at localhost/42441] zookeeper.ZKUtil(164): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 05:15:19,912 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41955 2023-07-19 05:15:19,912 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41955 2023-07-19 05:15:19,914 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41955 2023-07-19 05:15:19,915 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41955 2023-07-19 05:15:19,915 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41955 2023-07-19 05:15:19,917 INFO [Listener at localhost/42441] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 05:15:19,917 INFO [Listener at localhost/42441] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 05:15:19,917 INFO [Listener at localhost/42441] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 05:15:19,918 INFO [Listener at localhost/42441] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 05:15:19,918 INFO [Listener at localhost/42441] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 05:15:19,918 INFO [Listener at localhost/42441] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 05:15:19,918 INFO [Listener at localhost/42441] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-19 05:15:19,918 INFO [Listener at localhost/42441] http.HttpServer(1146): Jetty bound to port 39657 2023-07-19 05:15:19,918 INFO [Listener at localhost/42441] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 05:15:19,923 INFO [Listener at localhost/42441] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:19,924 INFO [Listener at localhost/42441] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@34c9e66a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/hadoop.log.dir/,AVAILABLE} 2023-07-19 05:15:19,924 INFO [Listener at localhost/42441] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:19,924 INFO [Listener at localhost/42441] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3c6fadb4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 05:15:20,040 INFO [Listener at localhost/42441] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 05:15:20,041 INFO [Listener at localhost/42441] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 05:15:20,041 INFO [Listener at localhost/42441] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 05:15:20,041 INFO [Listener at localhost/42441] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 05:15:20,042 INFO [Listener at localhost/42441] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:20,043 INFO [Listener at localhost/42441] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@53594c28{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/java.io.tmpdir/jetty-0_0_0_0-39657-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1792058455138950368/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:20,045 INFO [Listener at localhost/42441] server.AbstractConnector(333): Started ServerConnector@802ef93{HTTP/1.1, (http/1.1)}{0.0.0.0:39657} 2023-07-19 05:15:20,045 INFO [Listener at localhost/42441] server.Server(415): Started @44473ms 2023-07-19 05:15:20,058 INFO [Listener at localhost/42441] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 05:15:20,059 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:20,059 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:20,059 INFO [Listener at localhost/42441] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 05:15:20,059 INFO 
[Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:20,059 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 05:15:20,059 INFO [Listener at localhost/42441] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 05:15:20,060 INFO [Listener at localhost/42441] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35969 2023-07-19 05:15:20,060 INFO [Listener at localhost/42441] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 05:15:20,062 DEBUG [Listener at localhost/42441] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 05:15:20,062 INFO [Listener at localhost/42441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:20,063 INFO [Listener at localhost/42441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:20,064 INFO [Listener at localhost/42441] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35969 connecting to ZooKeeper ensemble=127.0.0.1:51693 2023-07-19 05:15:20,068 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:359690x0, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 05:15:20,069 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35969-0x1017c017df10003 connected 2023-07-19 05:15:20,069 DEBUG [Listener at localhost/42441] zookeeper.ZKUtil(164): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 05:15:20,070 DEBUG [Listener at localhost/42441] zookeeper.ZKUtil(164): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:20,070 DEBUG [Listener at localhost/42441] zookeeper.ZKUtil(164): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 05:15:20,070 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35969 2023-07-19 05:15:20,070 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35969 2023-07-19 05:15:20,073 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35969 2023-07-19 05:15:20,073 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35969 2023-07-19 05:15:20,073 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35969 2023-07-19 05:15:20,075 INFO [Listener at localhost/42441] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 05:15:20,075 INFO [Listener at localhost/42441] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 05:15:20,075 INFO [Listener at localhost/42441] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 05:15:20,076 INFO [Listener at localhost/42441] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 05:15:20,076 INFO [Listener at localhost/42441] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 05:15:20,076 INFO [Listener at localhost/42441] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 05:15:20,076 INFO [Listener at localhost/42441] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 05:15:20,076 INFO [Listener at localhost/42441] http.HttpServer(1146): Jetty bound to port 35323 2023-07-19 05:15:20,077 INFO [Listener at localhost/42441] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 05:15:20,081 INFO [Listener at localhost/42441] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:20,081 INFO [Listener at localhost/42441] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@10bd5ffd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/hadoop.log.dir/,AVAILABLE} 2023-07-19 05:15:20,081 INFO [Listener at localhost/42441] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:20,081 INFO [Listener at localhost/42441] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@33d0f78e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 05:15:20,198 INFO [Listener at localhost/42441] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 05:15:20,198 INFO [Listener at localhost/42441] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 05:15:20,198 INFO [Listener at localhost/42441] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 05:15:20,199 INFO [Listener at localhost/42441] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 05:15:20,199 INFO [Listener at localhost/42441] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:20,200 INFO [Listener at localhost/42441] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@6bdf1afe{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/java.io.tmpdir/jetty-0_0_0_0-35323-hbase-server-2_4_18-SNAPSHOT_jar-_-any-699344759452204155/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:20,201 INFO [Listener at localhost/42441] server.AbstractConnector(333): Started ServerConnector@478dee5b{HTTP/1.1, (http/1.1)}{0.0.0.0:35323} 2023-07-19 05:15:20,202 INFO [Listener at localhost/42441] server.Server(415): Started @44630ms 2023-07-19 05:15:20,204 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 05:15:20,267 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@4df21cd6{HTTP/1.1, (http/1.1)}{0.0.0.0:36379} 2023-07-19 05:15:20,267 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @44695ms 2023-07-19 05:15:20,269 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,39261,1689743719572 2023-07-19 05:15:20,273 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 05:15:20,274 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,39261,1689743719572 2023-07-19 05:15:20,275 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 05:15:20,275 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 05:15:20,276 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 05:15:20,276 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 05:15:20,277 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:20,279 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 05:15:20,280 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 05:15:20,280 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,39261,1689743719572 from backup master directory 2023-07-19 05:15:20,282 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,39261,1689743719572 2023-07-19 05:15:20,282 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 05:15:20,282 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 05:15:20,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,39261,1689743719572 2023-07-19 05:15:20,302 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/hbase.id with ID: 9a8a703c-8c1c-4564-ac22-7d6ba9a86997 2023-07-19 05:15:20,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:20,317 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:20,329 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0ba13049 to 127.0.0.1:51693 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:15:20,334 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@55080fea, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:15:20,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:20,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-19 05:15:20,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:15:20,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/MasterData/data/master/store-tmp 2023-07-19 05:15:20,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:20,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 05:15:20,345 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:15:20,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:15:20,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 05:15:20,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:15:20,345 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-19 05:15:20,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 05:15:20,346 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/MasterData/WALs/jenkins-hbase4.apache.org,39261,1689743719572 2023-07-19 05:15:20,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39261%2C1689743719572, suffix=, logDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/MasterData/WALs/jenkins-hbase4.apache.org,39261,1689743719572, archiveDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/MasterData/oldWALs, maxLogs=10 2023-07-19 05:15:20,365 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35727,DS-f3cc29e5-9316-4bf7-8e06-c992433b44f8,DISK] 2023-07-19 05:15:20,365 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38475,DS-74be87a0-13f3-435a-adac-f9bd6c929921,DISK] 2023-07-19 05:15:20,365 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41985,DS-345d74be-872e-4d52-9cb2-51f897ffa631,DISK] 2023-07-19 05:15:20,371 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/MasterData/WALs/jenkins-hbase4.apache.org,39261,1689743719572/jenkins-hbase4.apache.org%2C39261%2C1689743719572.1689743720349 2023-07-19 05:15:20,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35727,DS-f3cc29e5-9316-4bf7-8e06-c992433b44f8,DISK], DatanodeInfoWithStorage[127.0.0.1:38475,DS-74be87a0-13f3-435a-adac-f9bd6c929921,DISK], DatanodeInfoWithStorage[127.0.0.1:41985,DS-345d74be-872e-4d52-9cb2-51f897ffa631,DISK]] 2023-07-19 05:15:20,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:20,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:20,372 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:15:20,372 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:15:20,374 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:15:20,376 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-19 05:15:20,376 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-19 05:15:20,377 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:20,378 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:15:20,378 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:15:20,381 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 05:15:20,386 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:20,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10463471040, jitterRate=-0.02551332116127014}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:20,387 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 05:15:20,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-19 05:15:20,388 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-19 05:15:20,388 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-19 05:15:20,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-19 05:15:20,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-19 05:15:20,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-19 05:15:20,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-19 05:15:20,391 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-19 05:15:20,392 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-19 05:15:20,393 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-19 05:15:20,393 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-19 05:15:20,393 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-19 05:15:20,396 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:20,397 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-19 05:15:20,397 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-19 05:15:20,398 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-19 05:15:20,399 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:20,399 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:20,399 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-19 05:15:20,399 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:20,400 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:20,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,39261,1689743719572, sessionid=0x1017c017df10000, setting cluster-up flag (Was=false) 2023-07-19 05:15:20,408 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:20,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-19 05:15:20,414 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39261,1689743719572 2023-07-19 05:15:20,417 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:20,425 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-19 05:15:20,426 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39261,1689743719572 2023-07-19 05:15:20,426 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.hbase-snapshot/.tmp 2023-07-19 05:15:20,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-19 05:15:20,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-19 05:15:20,428 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-19 05:15:20,429 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 05:15:20,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-19 05:15:20,430 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-19 05:15:20,447 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 05:15:20,447 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-19 05:15:20,447 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 05:15:20,447 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-19 05:15:20,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 05:15:20,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 05:15:20,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 05:15:20,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 05:15:20,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-19 05:15:20,448 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,448 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 05:15:20,448 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689743750450 2023-07-19 05:15:20,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-19 05:15:20,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-19 05:15:20,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-19 05:15:20,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-19 05:15:20,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-19 05:15:20,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-19 05:15:20,451 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,451 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 05:15:20,451 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-19 05:15:20,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-19 05:15:20,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-19 05:15:20,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-19 05:15:20,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-19 05:15:20,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-19 05:15:20,452 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689743720452,5,FailOnTimeoutGroup] 2023-07-19 05:15:20,452 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689743720452,5,FailOnTimeoutGroup] 2023-07-19 05:15:20,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,453 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-07-19 05:15:20,453 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,453 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,453 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:20,472 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 05:15:20,473 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 05:15:20,473 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f 2023-07-19 05:15:20,486 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:20,487 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 05:15:20,489 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/info 2023-07-19 05:15:20,490 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 05:15:20,490 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:20,490 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 05:15:20,492 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/rep_barrier 2023-07-19 05:15:20,492 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 05:15:20,493 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:20,493 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 05:15:20,494 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/table 2023-07-19 
05:15:20,495 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 05:15:20,495 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:20,496 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740 2023-07-19 05:15:20,496 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740 2023-07-19 05:15:20,499 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-19 05:15:20,500 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 05:15:20,502 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:20,504 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11618938400, jitterRate=0.08209796249866486}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 05:15:20,504 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 05:15:20,504 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 05:15:20,504 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 05:15:20,504 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 05:15:20,504 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 05:15:20,504 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 05:15:20,504 INFO [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(951): ClusterId : 9a8a703c-8c1c-4564-ac22-7d6ba9a86997 2023-07-19 05:15:20,506 DEBUG [RS:0;jenkins-hbase4:46065] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 05:15:20,506 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 05:15:20,506 INFO [RS:1;jenkins-hbase4:41955] 
regionserver.HRegionServer(951): ClusterId : 9a8a703c-8c1c-4564-ac22-7d6ba9a86997 2023-07-19 05:15:20,506 INFO [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(951): ClusterId : 9a8a703c-8c1c-4564-ac22-7d6ba9a86997 2023-07-19 05:15:20,506 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 05:15:20,509 DEBUG [RS:2;jenkins-hbase4:35969] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 05:15:20,508 DEBUG [RS:1;jenkins-hbase4:41955] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 05:15:20,510 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 05:15:20,510 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-19 05:15:20,510 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-19 05:15:20,511 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-19 05:15:20,512 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-19 05:15:20,513 DEBUG [RS:0;jenkins-hbase4:46065] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 05:15:20,513 DEBUG [RS:0;jenkins-hbase4:46065] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 05:15:20,513 DEBUG [RS:2;jenkins-hbase4:35969] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 05:15:20,513 DEBUG [RS:2;jenkins-hbase4:35969] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 05:15:20,513 DEBUG [RS:1;jenkins-hbase4:41955] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 05:15:20,513 DEBUG [RS:1;jenkins-hbase4:41955] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 05:15:20,516 DEBUG [RS:0;jenkins-hbase4:46065] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 05:15:20,518 DEBUG [RS:1;jenkins-hbase4:41955] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 05:15:20,518 DEBUG [RS:0;jenkins-hbase4:46065] zookeeper.ReadOnlyZKClient(139): Connect 0x290a981c to 127.0.0.1:51693 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:15:20,519 DEBUG [RS:2;jenkins-hbase4:35969] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 05:15:20,521 DEBUG [RS:2;jenkins-hbase4:35969] zookeeper.ReadOnlyZKClient(139): Connect 0x27fae21c to 127.0.0.1:51693 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:15:20,522 DEBUG [RS:1;jenkins-hbase4:41955] zookeeper.ReadOnlyZKClient(139): Connect 0x2aae53b8 
to 127.0.0.1:51693 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:15:20,530 DEBUG [RS:0;jenkins-hbase4:46065] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@27cd634d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:15:20,532 DEBUG [RS:0;jenkins-hbase4:46065] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@23f2f36c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 05:15:20,535 DEBUG [RS:2;jenkins-hbase4:35969] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4f488f17, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:15:20,535 DEBUG [RS:1;jenkins-hbase4:41955] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6a60abc7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:15:20,536 DEBUG [RS:2;jenkins-hbase4:35969] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1617c009, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 05:15:20,536 DEBUG [RS:1;jenkins-hbase4:41955] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1bceb849, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 05:15:20,544 DEBUG [RS:1;jenkins-hbase4:41955] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:41955 2023-07-19 05:15:20,544 INFO [RS:1;jenkins-hbase4:41955] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 05:15:20,544 INFO [RS:1;jenkins-hbase4:41955] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 05:15:20,544 DEBUG [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-19 05:15:20,544 DEBUG [RS:0;jenkins-hbase4:46065] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46065 2023-07-19 05:15:20,544 INFO [RS:0;jenkins-hbase4:46065] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 05:15:20,544 INFO [RS:0;jenkins-hbase4:46065] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 05:15:20,544 INFO [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39261,1689743719572 with isa=jenkins-hbase4.apache.org/172.31.14.131:41955, startcode=1689743719898 2023-07-19 05:15:20,544 DEBUG [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 05:15:20,545 DEBUG [RS:1;jenkins-hbase4:41955] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 05:15:20,545 INFO [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39261,1689743719572 with isa=jenkins-hbase4.apache.org/172.31.14.131:46065, startcode=1689743719751 2023-07-19 05:15:20,545 DEBUG [RS:0;jenkins-hbase4:46065] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 05:15:20,546 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44867, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 05:15:20,548 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39261] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:20,548 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46919, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 05:15:20,548 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 05:15:20,548 DEBUG [RS:2;jenkins-hbase4:35969] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:35969 2023-07-19 05:15:20,549 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-19 05:15:20,549 INFO [RS:2;jenkins-hbase4:35969] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 05:15:20,549 INFO [RS:2;jenkins-hbase4:35969] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 05:15:20,549 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39261] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:20,549 DEBUG [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 05:15:20,549 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-19 05:15:20,549 DEBUG [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f 2023-07-19 05:15:20,549 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-19 05:15:20,549 DEBUG [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44175 2023-07-19 05:15:20,549 DEBUG [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44249 2023-07-19 05:15:20,549 DEBUG [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f 2023-07-19 05:15:20,550 DEBUG [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44175 2023-07-19 05:15:20,550 DEBUG [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44249 2023-07-19 05:15:20,550 INFO [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39261,1689743719572 with isa=jenkins-hbase4.apache.org/172.31.14.131:35969, startcode=1689743720058 2023-07-19 05:15:20,550 DEBUG [RS:2;jenkins-hbase4:35969] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 05:15:20,551 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:20,551 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39873, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 05:15:20,551 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39261] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:20,551 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-19 05:15:20,552 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-19 05:15:20,552 DEBUG [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f 2023-07-19 05:15:20,552 DEBUG [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44175 2023-07-19 05:15:20,552 DEBUG [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44249 2023-07-19 05:15:20,557 DEBUG [RS:1;jenkins-hbase4:41955] zookeeper.ZKUtil(162): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:20,557 DEBUG [RS:0;jenkins-hbase4:46065] zookeeper.ZKUtil(162): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:20,557 WARN [RS:1;jenkins-hbase4:41955] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 05:15:20,557 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35969,1689743720058] 2023-07-19 05:15:20,557 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46065,1689743719751] 2023-07-19 05:15:20,558 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41955,1689743719898] 2023-07-19 05:15:20,558 DEBUG [RS:2;jenkins-hbase4:35969] zookeeper.ZKUtil(162): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:20,557 INFO [RS:1;jenkins-hbase4:41955] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:15:20,558 WARN [RS:2;jenkins-hbase4:35969] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 05:15:20,558 DEBUG [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/WALs/jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:20,558 INFO [RS:2;jenkins-hbase4:35969] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:15:20,557 WARN [RS:0;jenkins-hbase4:46065] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-19 05:15:20,558 DEBUG [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/WALs/jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:20,558 INFO [RS:0;jenkins-hbase4:46065] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:15:20,558 DEBUG [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/WALs/jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:20,570 DEBUG [RS:1;jenkins-hbase4:41955] zookeeper.ZKUtil(162): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:20,570 DEBUG [RS:1;jenkins-hbase4:41955] zookeeper.ZKUtil(162): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:20,571 DEBUG [RS:1;jenkins-hbase4:41955] zookeeper.ZKUtil(162): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:20,572 DEBUG [RS:0;jenkins-hbase4:46065] zookeeper.ZKUtil(162): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:20,572 DEBUG [RS:2;jenkins-hbase4:35969] zookeeper.ZKUtil(162): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:20,572 DEBUG [RS:1;jenkins-hbase4:41955] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 05:15:20,572 DEBUG [RS:0;jenkins-hbase4:46065] zookeeper.ZKUtil(162): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:20,573 DEBUG [RS:2;jenkins-hbase4:35969] zookeeper.ZKUtil(162): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:20,573 INFO [RS:1;jenkins-hbase4:41955] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 05:15:20,573 DEBUG [RS:0;jenkins-hbase4:46065] zookeeper.ZKUtil(162): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:20,573 DEBUG [RS:2;jenkins-hbase4:35969] zookeeper.ZKUtil(162): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:20,574 DEBUG [RS:0;jenkins-hbase4:46065] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 05:15:20,574 INFO [RS:0;jenkins-hbase4:46065] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 05:15:20,574 DEBUG [RS:2;jenkins-hbase4:35969] regionserver.Replication(139): Replication stats-in-log 
period=300 seconds 2023-07-19 05:15:20,574 INFO [RS:1;jenkins-hbase4:41955] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 05:15:20,575 INFO [RS:1;jenkins-hbase4:41955] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 05:15:20,575 INFO [RS:1;jenkins-hbase4:41955] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,575 INFO [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 05:15:20,576 INFO [RS:2;jenkins-hbase4:35969] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 05:15:20,576 INFO [RS:1;jenkins-hbase4:41955] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,576 INFO [RS:0;jenkins-hbase4:46065] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 05:15:20,577 DEBUG [RS:1;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,577 DEBUG [RS:1;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,577 DEBUG [RS:1;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,577 DEBUG [RS:1;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,578 DEBUG [RS:1;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,578 DEBUG [RS:1;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 05:15:20,578 DEBUG [RS:1;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,578 DEBUG [RS:1;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,578 DEBUG [RS:1;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,578 DEBUG [RS:1;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,578 INFO [RS:0;jenkins-hbase4:46065] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 
05:15:20,579 INFO [RS:2;jenkins-hbase4:35969] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 05:15:20,579 INFO [RS:0;jenkins-hbase4:46065] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,584 INFO [RS:1;jenkins-hbase4:41955] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,584 INFO [RS:2;jenkins-hbase4:35969] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 05:15:20,584 INFO [RS:1;jenkins-hbase4:41955] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,584 INFO [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 05:15:20,584 INFO [RS:2;jenkins-hbase4:35969] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,584 INFO [RS:1;jenkins-hbase4:41955] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,589 INFO [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 05:15:20,589 INFO [RS:0;jenkins-hbase4:46065] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,590 DEBUG [RS:0;jenkins-hbase4:46065] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,591 DEBUG [RS:0;jenkins-hbase4:46065] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,591 DEBUG [RS:0;jenkins-hbase4:46065] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,591 DEBUG [RS:0;jenkins-hbase4:46065] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,591 DEBUG [RS:0;jenkins-hbase4:46065] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,591 DEBUG [RS:0;jenkins-hbase4:46065] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 05:15:20,591 DEBUG [RS:0;jenkins-hbase4:46065] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,591 DEBUG [RS:0;jenkins-hbase4:46065] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,591 DEBUG [RS:0;jenkins-hbase4:46065] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 
2023-07-19 05:15:20,591 DEBUG [RS:0;jenkins-hbase4:46065] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,591 INFO [RS:2;jenkins-hbase4:35969] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,591 DEBUG [RS:2;jenkins-hbase4:35969] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,591 DEBUG [RS:2;jenkins-hbase4:35969] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,591 DEBUG [RS:2;jenkins-hbase4:35969] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,592 DEBUG [RS:2;jenkins-hbase4:35969] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,592 DEBUG [RS:2;jenkins-hbase4:35969] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,592 DEBUG [RS:2;jenkins-hbase4:35969] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 05:15:20,592 DEBUG [RS:2;jenkins-hbase4:35969] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,592 DEBUG [RS:2;jenkins-hbase4:35969] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,592 DEBUG [RS:2;jenkins-hbase4:35969] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,592 DEBUG [RS:2;jenkins-hbase4:35969] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:20,595 INFO [RS:0;jenkins-hbase4:46065] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,595 INFO [RS:0;jenkins-hbase4:46065] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,595 INFO [RS:0;jenkins-hbase4:46065] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,597 INFO [RS:2;jenkins-hbase4:35969] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,597 INFO [RS:2;jenkins-hbase4:35969] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,599 INFO [RS:2;jenkins-hbase4:35969] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-19 05:15:20,610 INFO [RS:1;jenkins-hbase4:41955] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 05:15:20,610 INFO [RS:1;jenkins-hbase4:41955] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41955,1689743719898-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,615 INFO [RS:0;jenkins-hbase4:46065] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 05:15:20,615 INFO [RS:2;jenkins-hbase4:35969] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 05:15:20,615 INFO [RS:0;jenkins-hbase4:46065] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46065,1689743719751-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,615 INFO [RS:2;jenkins-hbase4:35969] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35969,1689743720058-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,630 INFO [RS:0;jenkins-hbase4:46065] regionserver.Replication(203): jenkins-hbase4.apache.org,46065,1689743719751 started 2023-07-19 05:15:20,631 INFO [RS:1;jenkins-hbase4:41955] regionserver.Replication(203): jenkins-hbase4.apache.org,41955,1689743719898 started 2023-07-19 05:15:20,631 INFO [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46065,1689743719751, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46065, sessionid=0x1017c017df10001 2023-07-19 05:15:20,631 INFO [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41955,1689743719898, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41955, sessionid=0x1017c017df10002 2023-07-19 05:15:20,631 DEBUG [RS:0;jenkins-hbase4:46065] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 05:15:20,631 DEBUG [RS:0;jenkins-hbase4:46065] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:20,631 DEBUG [RS:0;jenkins-hbase4:46065] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46065,1689743719751' 2023-07-19 05:15:20,631 DEBUG [RS:0;jenkins-hbase4:46065] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 05:15:20,631 DEBUG [RS:1;jenkins-hbase4:41955] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 05:15:20,631 DEBUG [RS:1;jenkins-hbase4:41955] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:20,631 DEBUG [RS:1;jenkins-hbase4:41955] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41955,1689743719898' 2023-07-19 05:15:20,631 DEBUG [RS:1;jenkins-hbase4:41955] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 05:15:20,631 DEBUG [RS:0;jenkins-hbase4:46065] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 05:15:20,631 DEBUG [RS:1;jenkins-hbase4:41955] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 05:15:20,632 DEBUG [RS:1;jenkins-hbase4:41955] 
procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 05:15:20,632 DEBUG [RS:0;jenkins-hbase4:46065] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 05:15:20,632 DEBUG [RS:0;jenkins-hbase4:46065] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 05:15:20,632 DEBUG [RS:1;jenkins-hbase4:41955] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 05:15:20,632 DEBUG [RS:0;jenkins-hbase4:46065] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:20,632 DEBUG [RS:0;jenkins-hbase4:46065] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46065,1689743719751' 2023-07-19 05:15:20,632 DEBUG [RS:0;jenkins-hbase4:46065] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 05:15:20,632 DEBUG [RS:1;jenkins-hbase4:41955] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:20,632 DEBUG [RS:1;jenkins-hbase4:41955] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41955,1689743719898' 2023-07-19 05:15:20,632 DEBUG [RS:1;jenkins-hbase4:41955] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 05:15:20,632 DEBUG [RS:0;jenkins-hbase4:46065] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 05:15:20,632 INFO [RS:2;jenkins-hbase4:35969] regionserver.Replication(203): jenkins-hbase4.apache.org,35969,1689743720058 started 2023-07-19 05:15:20,633 DEBUG [RS:0;jenkins-hbase4:46065] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 05:15:20,633 INFO [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35969,1689743720058, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35969, sessionid=0x1017c017df10003 2023-07-19 05:15:20,632 DEBUG [RS:1;jenkins-hbase4:41955] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 05:15:20,633 DEBUG [RS:2;jenkins-hbase4:35969] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 05:15:20,633 DEBUG [RS:2;jenkins-hbase4:35969] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:20,633 DEBUG [RS:2;jenkins-hbase4:35969] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35969,1689743720058' 2023-07-19 05:15:20,633 DEBUG [RS:2;jenkins-hbase4:35969] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 05:15:20,633 INFO [RS:0;jenkins-hbase4:46065] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 05:15:20,633 INFO [RS:0;jenkins-hbase4:46065] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-19 05:15:20,633 DEBUG [RS:1;jenkins-hbase4:41955] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 05:15:20,633 INFO [RS:1;jenkins-hbase4:41955] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 05:15:20,633 INFO [RS:1;jenkins-hbase4:41955] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-19 05:15:20,633 DEBUG [RS:2;jenkins-hbase4:35969] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 05:15:20,634 DEBUG [RS:2;jenkins-hbase4:35969] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 05:15:20,634 DEBUG [RS:2;jenkins-hbase4:35969] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 05:15:20,634 DEBUG [RS:2;jenkins-hbase4:35969] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:20,634 DEBUG [RS:2;jenkins-hbase4:35969] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35969,1689743720058' 2023-07-19 05:15:20,634 DEBUG [RS:2;jenkins-hbase4:35969] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 05:15:20,634 DEBUG [RS:2;jenkins-hbase4:35969] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 05:15:20,634 DEBUG [RS:2;jenkins-hbase4:35969] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 05:15:20,635 INFO [RS:2;jenkins-hbase4:35969] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 05:15:20,635 INFO [RS:2;jenkins-hbase4:35969] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-19 05:15:20,663 DEBUG [jenkins-hbase4:39261] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-19 05:15:20,663 DEBUG [jenkins-hbase4:39261] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:15:20,663 DEBUG [jenkins-hbase4:39261] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:15:20,663 DEBUG [jenkins-hbase4:39261] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:15:20,663 DEBUG [jenkins-hbase4:39261] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:15:20,663 DEBUG [jenkins-hbase4:39261] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:15:20,664 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46065,1689743719751, state=OPENING 2023-07-19 05:15:20,666 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-19 05:15:20,667 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:20,667 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46065,1689743719751}] 2023-07-19 05:15:20,667 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 05:15:20,735 INFO [RS:0;jenkins-hbase4:46065] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46065%2C1689743719751, suffix=, logDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/WALs/jenkins-hbase4.apache.org,46065,1689743719751, archiveDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/oldWALs, maxLogs=32 2023-07-19 05:15:20,736 INFO [RS:2;jenkins-hbase4:35969] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35969%2C1689743720058, suffix=, logDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/WALs/jenkins-hbase4.apache.org,35969,1689743720058, archiveDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/oldWALs, maxLogs=32 2023-07-19 05:15:20,736 WARN [ReadOnlyZKClient-127.0.0.1:51693@0x0ba13049] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-19 05:15:20,737 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39261,1689743719572] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:15:20,739 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51420, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:15:20,739 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46065] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:51420 deadline: 1689743780739, 
exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:20,735 INFO [RS:1;jenkins-hbase4:41955] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41955%2C1689743719898, suffix=, logDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/WALs/jenkins-hbase4.apache.org,41955,1689743719898, archiveDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/oldWALs, maxLogs=32 2023-07-19 05:15:20,764 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35727,DS-f3cc29e5-9316-4bf7-8e06-c992433b44f8,DISK] 2023-07-19 05:15:20,766 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41985,DS-345d74be-872e-4d52-9cb2-51f897ffa631,DISK] 2023-07-19 05:15:20,766 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38475,DS-74be87a0-13f3-435a-adac-f9bd6c929921,DISK] 2023-07-19 05:15:20,767 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35727,DS-f3cc29e5-9316-4bf7-8e06-c992433b44f8,DISK] 2023-07-19 05:15:20,767 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38475,DS-74be87a0-13f3-435a-adac-f9bd6c929921,DISK] 2023-07-19 05:15:20,768 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41985,DS-345d74be-872e-4d52-9cb2-51f897ffa631,DISK] 2023-07-19 05:15:20,779 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38475,DS-74be87a0-13f3-435a-adac-f9bd6c929921,DISK] 2023-07-19 05:15:20,779 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41985,DS-345d74be-872e-4d52-9cb2-51f897ffa631,DISK] 2023-07-19 05:15:20,779 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35727,DS-f3cc29e5-9316-4bf7-8e06-c992433b44f8,DISK] 2023-07-19 05:15:20,784 INFO [RS:2;jenkins-hbase4:35969] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/WALs/jenkins-hbase4.apache.org,35969,1689743720058/jenkins-hbase4.apache.org%2C35969%2C1689743720058.1689743720736 2023-07-19 
05:15:20,784 INFO [RS:0;jenkins-hbase4:46065] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/WALs/jenkins-hbase4.apache.org,46065,1689743719751/jenkins-hbase4.apache.org%2C46065%2C1689743719751.1689743720736 2023-07-19 05:15:20,784 DEBUG [RS:2;jenkins-hbase4:35969] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41985,DS-345d74be-872e-4d52-9cb2-51f897ffa631,DISK], DatanodeInfoWithStorage[127.0.0.1:38475,DS-74be87a0-13f3-435a-adac-f9bd6c929921,DISK], DatanodeInfoWithStorage[127.0.0.1:35727,DS-f3cc29e5-9316-4bf7-8e06-c992433b44f8,DISK]] 2023-07-19 05:15:20,785 INFO [RS:1;jenkins-hbase4:41955] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/WALs/jenkins-hbase4.apache.org,41955,1689743719898/jenkins-hbase4.apache.org%2C41955%2C1689743719898.1689743720743 2023-07-19 05:15:20,785 DEBUG [RS:0;jenkins-hbase4:46065] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41985,DS-345d74be-872e-4d52-9cb2-51f897ffa631,DISK], DatanodeInfoWithStorage[127.0.0.1:38475,DS-74be87a0-13f3-435a-adac-f9bd6c929921,DISK], DatanodeInfoWithStorage[127.0.0.1:35727,DS-f3cc29e5-9316-4bf7-8e06-c992433b44f8,DISK]] 2023-07-19 05:15:20,786 DEBUG [RS:1;jenkins-hbase4:41955] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38475,DS-74be87a0-13f3-435a-adac-f9bd6c929921,DISK], DatanodeInfoWithStorage[127.0.0.1:41985,DS-345d74be-872e-4d52-9cb2-51f897ffa631,DISK], DatanodeInfoWithStorage[127.0.0.1:35727,DS-f3cc29e5-9316-4bf7-8e06-c992433b44f8,DISK]] 2023-07-19 05:15:20,824 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:20,827 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 05:15:20,828 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51424, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 05:15:20,832 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-19 05:15:20,833 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:15:20,834 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46065%2C1689743719751.meta, suffix=.meta, logDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/WALs/jenkins-hbase4.apache.org,46065,1689743719751, archiveDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/oldWALs, maxLogs=32 2023-07-19 05:15:20,849 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41985,DS-345d74be-872e-4d52-9cb2-51f897ffa631,DISK] 2023-07-19 05:15:20,849 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:38475,DS-74be87a0-13f3-435a-adac-f9bd6c929921,DISK] 2023-07-19 05:15:20,849 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35727,DS-f3cc29e5-9316-4bf7-8e06-c992433b44f8,DISK] 2023-07-19 05:15:20,852 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/WALs/jenkins-hbase4.apache.org,46065,1689743719751/jenkins-hbase4.apache.org%2C46065%2C1689743719751.meta.1689743720835.meta 2023-07-19 05:15:20,853 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41985,DS-345d74be-872e-4d52-9cb2-51f897ffa631,DISK], DatanodeInfoWithStorage[127.0.0.1:35727,DS-f3cc29e5-9316-4bf7-8e06-c992433b44f8,DISK], DatanodeInfoWithStorage[127.0.0.1:38475,DS-74be87a0-13f3-435a-adac-f9bd6c929921,DISK]] 2023-07-19 05:15:20,854 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:20,854 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 05:15:20,854 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-19 05:15:20,854 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
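[editor's note] The repeated "WAL configuration: blocksize=256 MB, rollsize=128 MB, ..., maxLogs=32" lines and the AsyncFSWALProvider instantiation reflect stock HBase 2.x WAL defaults (by default the WAL block size is derived from the underlying filesystem block size; setting it explicitly pins the value). A sketch of the configuration keys behind those numbers, with the values copied from the log and the holder class name invented:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfSketch {
      public static Configuration walConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "asyncfs");                              // AsyncFSWALProvider, as instantiated above
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);  // blocksize=256 MB
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);           // rollsize = 0.5 * blocksize = 128 MB
        conf.setInt("hbase.regionserver.maxlogs", 32);                          // maxLogs=32
        return conf;
      }
    }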
2023-07-19 05:15:20,854 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-19 05:15:20,854 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:20,854 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-19 05:15:20,854 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-19 05:15:20,855 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 05:15:20,856 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/info 2023-07-19 05:15:20,857 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/info 2023-07-19 05:15:20,857 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 05:15:20,857 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:20,858 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 05:15:20,859 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/rep_barrier 2023-07-19 05:15:20,859 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/rep_barrier 2023-07-19 05:15:20,859 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 05:15:20,859 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:20,859 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 05:15:20,860 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/table 2023-07-19 05:15:20,860 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/table 2023-07-19 05:15:20,861 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 05:15:20,861 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:20,862 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740 2023-07-19 05:15:20,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740 2023-07-19 05:15:20,865 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
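[editor's note] The CompactionConfiguration dump, repeated for the info, rep_barrier and table families of hbase:meta, is just the default ExploringCompactionPolicy tuning. A sketch of the standard keys those numbers come from, values copied from the log output and the class name invented:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionTuningSketch {
      public static Configuration compactionConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);                            // minFilesToCompact:3
        conf.setInt("hbase.hstore.compaction.max", 10);                           // maxFilesToCompact:10
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);     // minCompactSize:128 MB
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                     // ratio 1.200000
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);             // off-peak ratio 5.000000
        conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L); // throttle point
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);                // major period (7 days)
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);              // major jitter 0.500000
        return conf;
      }
    }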
2023-07-19 05:15:20,866 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 05:15:20,867 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10162110400, jitterRate=-0.05357971787452698}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 05:15:20,867 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 05:15:20,868 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689743720824 2023-07-19 05:15:20,873 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-19 05:15:20,873 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-19 05:15:20,874 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46065,1689743719751, state=OPEN 2023-07-19 05:15:20,875 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 05:15:20,875 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 05:15:20,879 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-19 05:15:20,879 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46065,1689743719751 in 208 msec 2023-07-19 05:15:20,880 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-19 05:15:20,880 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 369 msec 2023-07-19 05:15:20,882 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 452 msec 2023-07-19 05:15:20,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689743720882, completionTime=-1 2023-07-19 05:15:20,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-19 05:15:20,882 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
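[editor's note] At this point hbase:meta is OPEN on jenkins-hbase4.apache.org,46065 and the earlier NotServingRegionException from the premature Get would no longer occur. A client-side sketch of the same check through the public API, locating hbase:meta (which resolves via the meta-region-server znode updated above when the ZK connection registry is in use) and running a short scan; the class name is invented and the connection is assumed to point at this mini cluster:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaCheckSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // assumes hbase-site.xml points at the cluster
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          // Where is hbase:meta deployed?
          try (RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
            HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
            System.out.println("hbase:meta is on " + loc.getServerName());
          }
          // A short scan only succeeds once the region is actually online on that server.
          try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
               ResultScanner scanner = meta.getScanner(new Scan().setLimit(5))) {
            for (Result r : scanner) {
              System.out.println(Bytes.toStringBinary(r.getRow()));
            }
          }
        }
      }
    }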
2023-07-19 05:15:20,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-19 05:15:20,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689743780887 2023-07-19 05:15:20,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689743840887 2023-07-19 05:15:20,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-19 05:15:20,894 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39261,1689743719572-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,895 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39261,1689743719572-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,895 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39261,1689743719572-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,895 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:39261, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,895 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:20,895 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
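[editor's note] The ChoreService lines show the master scheduling its periodic maintenance chores at their default periods. A sketch of the properties that drive the most commonly tuned ones; the key names here are the usual documented ones but should be treated as assumptions, with the values copied from the log and the class name invented:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ChorePeriodSketch {
      public static Configuration choreConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.balancer.period", 300000);          // BalancerChore period=300000 ms
        conf.setInt("hbase.normalizer.period", 300000);        // RegionNormalizerChore period=300000 ms
        conf.setInt("hbase.catalogjanitor.interval", 300000);  // CatalogJanitor period=300000 ms
        return conf;
      }
    }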
2023-07-19 05:15:20,895 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:20,898 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-19 05:15:20,898 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-19 05:15:20,900 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:15:20,901 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:15:20,902 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/hbase/namespace/b9b46c9a9a26be1b89c10c145434cfe8 2023-07-19 05:15:20,903 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/hbase/namespace/b9b46c9a9a26be1b89c10c145434cfe8 empty. 2023-07-19 05:15:20,903 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/hbase/namespace/b9b46c9a9a26be1b89c10c145434cfe8 2023-07-19 05:15:20,903 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-19 05:15:20,916 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-19 05:15:20,918 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => b9b46c9a9a26be1b89c10c145434cfe8, NAME => 'hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp 2023-07-19 05:15:20,931 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:20,931 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing b9b46c9a9a26be1b89c10c145434cfe8, disabling compactions & flushes 2023-07-19 05:15:20,931 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8. 
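[editor's note] The create 'hbase:namespace' call above prints its table descriptor in shell notation. For reference, an equivalent descriptor built through the Java Admin API, applied here to a hypothetical user table rather than the system table itself; the table and class names are made up, while the attribute values mirror the ones in the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.KeepDeletedCells;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceLikeTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          admin.createTable(TableDescriptorBuilder
              .newBuilder(TableName.valueOf("example_ns_like"))     // hypothetical table name
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                  .setBloomFilterType(BloomType.ROW)                // BLOOMFILTER => 'ROW'
                  .setInMemory(true)                                // IN_MEMORY => 'true'
                  .setMaxVersions(10)                               // VERSIONS => '10'
                  .setKeepDeletedCells(KeepDeletedCells.FALSE)      // KEEP_DELETED_CELLS => 'FALSE'
                  .setDataBlockEncoding(DataBlockEncoding.NONE)     // DATA_BLOCK_ENCODING => 'NONE'
                  .setCompressionType(Compression.Algorithm.NONE)   // COMPRESSION => 'NONE'
                  .setTimeToLive(HConstants.FOREVER)                // TTL => 'FOREVER'
                  .setMinVersions(0)                                // MIN_VERSIONS => '0'
                  .setBlockCacheEnabled(true)                       // BLOCKCACHE => 'true'
                  .setBlocksize(8192)                               // BLOCKSIZE => '8192'
                  .setScope(0)                                      // REPLICATION_SCOPE => '0'
                  .build())
              .build());
        }
      }
    }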
2023-07-19 05:15:20,931 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8. 2023-07-19 05:15:20,931 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8. after waiting 0 ms 2023-07-19 05:15:20,931 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8. 2023-07-19 05:15:20,931 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8. 2023-07-19 05:15:20,931 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for b9b46c9a9a26be1b89c10c145434cfe8: 2023-07-19 05:15:20,933 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:15:20,934 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743720934"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743720934"}]},"ts":"1689743720934"} 2023-07-19 05:15:20,937 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 05:15:20,938 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:15:20,938 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743720938"}]},"ts":"1689743720938"} 2023-07-19 05:15:20,939 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-19 05:15:20,942 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:15:20,942 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:15:20,942 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:15:20,942 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:15:20,942 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:15:20,943 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b9b46c9a9a26be1b89c10c145434cfe8, ASSIGN}] 2023-07-19 05:15:20,945 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b9b46c9a9a26be1b89c10c145434cfe8, ASSIGN 2023-07-19 05:15:20,949 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=b9b46c9a9a26be1b89c10c145434cfe8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41955,1689743719898; forceNewPlan=false, retain=false 2023-07-19 05:15:21,041 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39261,1689743719572] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:21,044 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39261,1689743719572] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-19 05:15:21,045 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:15:21,046 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:15:21,048 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/hbase/rsgroup/83121727a02707589af30990e9f79713 2023-07-19 05:15:21,048 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/hbase/rsgroup/83121727a02707589af30990e9f79713 empty. 
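[editor's note] The create 'hbase:rsgroup' descriptor above differs from an ordinary table mainly in its table-level attributes: the MultiRowMutationEndpoint coprocessor and the DisabledRegionSplitPolicy. A sketch of expressing those two attributes with TableDescriptorBuilder, again on a hypothetical table; the coprocessor and split-policy classes are exactly the ones named in the log, and note that the system table registers the coprocessor at the priority shown (536870911), whereas setCoprocessor(String) uses the default:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RsGroupLikeDescriptorSketch {
      public static TableDescriptor build() throws IOException {
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example_rsgroup_like"))   // hypothetical table name
            // coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            // METADATA => {'SPLIT_POLICY' => '...DisabledRegionSplitPolicy'}
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
                .setMaxVersions(1)                                   // VERSIONS => '1'
                .build())
            .build();
      }
    }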
2023-07-19 05:15:21,049 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/hbase/rsgroup/83121727a02707589af30990e9f79713 2023-07-19 05:15:21,049 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-19 05:15:21,064 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-19 05:15:21,065 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 83121727a02707589af30990e9f79713, NAME => 'hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp 2023-07-19 05:15:21,081 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:21,082 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 83121727a02707589af30990e9f79713, disabling compactions & flushes 2023-07-19 05:15:21,082 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713. 2023-07-19 05:15:21,082 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713. 2023-07-19 05:15:21,082 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713. after waiting 0 ms 2023-07-19 05:15:21,082 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713. 2023-07-19 05:15:21,082 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713. 
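[editor's note] The hbase:rsgroup table is being created by the RSGroupStartupWorker because this cluster runs with the rsgroup coprocessor and balancer wired in. For the 2.x hbase-rsgroup module that wiring is, to the best of my knowledge, the two-property setup sketched below; once the log reports "GroupBasedLoadBalancer is now online", groups can be managed through RSGroupAdminClient (the group name here is invented):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupSetupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // hbase-site.xml equivalent of enabling the rsgroup feature on the master.
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("test_group"); // hypothetical group name
        }
      }
    }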
2023-07-19 05:15:21,082 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 83121727a02707589af30990e9f79713: 2023-07-19 05:15:21,084 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:15:21,085 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689743721085"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743721085"}]},"ts":"1689743721085"} 2023-07-19 05:15:21,086 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 05:15:21,087 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:15:21,087 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743721087"}]},"ts":"1689743721087"} 2023-07-19 05:15:21,088 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-19 05:15:21,092 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:15:21,092 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:15:21,092 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:15:21,092 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:15:21,092 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:15:21,092 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=83121727a02707589af30990e9f79713, ASSIGN}] 2023-07-19 05:15:21,093 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=83121727a02707589af30990e9f79713, ASSIGN 2023-07-19 05:15:21,094 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=83121727a02707589af30990e9f79713, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35969,1689743720058; forceNewPlan=false, retain=false 2023-07-19 05:15:21,094 INFO [jenkins-hbase4:39261] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-19 05:15:21,096 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b9b46c9a9a26be1b89c10c145434cfe8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:21,097 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743721096"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743721096"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743721096"}]},"ts":"1689743721096"} 2023-07-19 05:15:21,097 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=83121727a02707589af30990e9f79713, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:21,097 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689743721097"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743721097"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743721097"}]},"ts":"1689743721097"} 2023-07-19 05:15:21,099 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure b9b46c9a9a26be1b89c10c145434cfe8, server=jenkins-hbase4.apache.org,41955,1689743719898}] 2023-07-19 05:15:21,103 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 83121727a02707589af30990e9f79713, server=jenkins-hbase4.apache.org,35969,1689743720058}] 2023-07-19 05:15:21,255 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:21,255 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 05:15:21,257 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:21,258 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 05:15:21,258 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52288, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 05:15:21,259 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38436, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 05:15:21,263 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713. 
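[editor's note] pid=8 and pid=9 dispatch OpenRegionProcedure calls to two different region servers, so hbase:namespace lands on 41955 and hbase:rsgroup on 35969. A sketch of how a client could confirm where a table's regions ended up once assignment finishes; the class name is invented and any table name could be substituted:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionLocationSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // e.g. hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8. on ...,41955,...
            System.out.println(loc.getRegion().getRegionNameAsString() + " on " + loc.getServerName());
          }
        }
      }
    }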
2023-07-19 05:15:21,264 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 83121727a02707589af30990e9f79713, NAME => 'hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:21,264 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 05:15:21,264 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713. service=MultiRowMutationService 2023-07-19 05:15:21,264 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-19 05:15:21,264 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 83121727a02707589af30990e9f79713 2023-07-19 05:15:21,264 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:21,264 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 83121727a02707589af30990e9f79713 2023-07-19 05:15:21,264 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 83121727a02707589af30990e9f79713 2023-07-19 05:15:21,267 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8. 
2023-07-19 05:15:21,267 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b9b46c9a9a26be1b89c10c145434cfe8, NAME => 'hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:21,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace b9b46c9a9a26be1b89c10c145434cfe8 2023-07-19 05:15:21,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:21,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b9b46c9a9a26be1b89c10c145434cfe8 2023-07-19 05:15:21,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b9b46c9a9a26be1b89c10c145434cfe8 2023-07-19 05:15:21,268 INFO [StoreOpener-83121727a02707589af30990e9f79713-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 83121727a02707589af30990e9f79713 2023-07-19 05:15:21,269 INFO [StoreOpener-b9b46c9a9a26be1b89c10c145434cfe8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b9b46c9a9a26be1b89c10c145434cfe8 2023-07-19 05:15:21,270 DEBUG [StoreOpener-83121727a02707589af30990e9f79713-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/rsgroup/83121727a02707589af30990e9f79713/m 2023-07-19 05:15:21,270 DEBUG [StoreOpener-83121727a02707589af30990e9f79713-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/rsgroup/83121727a02707589af30990e9f79713/m 2023-07-19 05:15:21,270 INFO [StoreOpener-83121727a02707589af30990e9f79713-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 83121727a02707589af30990e9f79713 columnFamilyName m 2023-07-19 05:15:21,271 INFO [StoreOpener-83121727a02707589af30990e9f79713-1] regionserver.HStore(310): Store=83121727a02707589af30990e9f79713/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:21,271 DEBUG 
[StoreOpener-b9b46c9a9a26be1b89c10c145434cfe8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/namespace/b9b46c9a9a26be1b89c10c145434cfe8/info 2023-07-19 05:15:21,271 DEBUG [StoreOpener-b9b46c9a9a26be1b89c10c145434cfe8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/namespace/b9b46c9a9a26be1b89c10c145434cfe8/info 2023-07-19 05:15:21,272 INFO [StoreOpener-b9b46c9a9a26be1b89c10c145434cfe8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b9b46c9a9a26be1b89c10c145434cfe8 columnFamilyName info 2023-07-19 05:15:21,272 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/rsgroup/83121727a02707589af30990e9f79713 2023-07-19 05:15:21,272 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/rsgroup/83121727a02707589af30990e9f79713 2023-07-19 05:15:21,272 INFO [StoreOpener-b9b46c9a9a26be1b89c10c145434cfe8-1] regionserver.HStore(310): Store=b9b46c9a9a26be1b89c10c145434cfe8/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:21,273 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/namespace/b9b46c9a9a26be1b89c10c145434cfe8 2023-07-19 05:15:21,274 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/namespace/b9b46c9a9a26be1b89c10c145434cfe8 2023-07-19 05:15:21,275 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 83121727a02707589af30990e9f79713 2023-07-19 05:15:21,277 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b9b46c9a9a26be1b89c10c145434cfe8 2023-07-19 05:15:21,281 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/rsgroup/83121727a02707589af30990e9f79713/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:21,281 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/namespace/b9b46c9a9a26be1b89c10c145434cfe8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:21,282 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 83121727a02707589af30990e9f79713; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@b93858, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:21,282 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 83121727a02707589af30990e9f79713: 2023-07-19 05:15:21,282 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b9b46c9a9a26be1b89c10c145434cfe8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11496353920, jitterRate=0.07068139314651489}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:21,282 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b9b46c9a9a26be1b89c10c145434cfe8: 2023-07-19 05:15:21,284 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8., pid=8, masterSystemTime=1689743721255 2023-07-19 05:15:21,285 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713., pid=9, masterSystemTime=1689743721257 2023-07-19 05:15:21,290 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8. 2023-07-19 05:15:21,291 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8. 
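[editor's note] The two "Opened ... next sequenceid=2" lines make the split-policy difference visible: hbase:rsgroup uses the DisabledRegionSplitPolicy from its descriptor, while hbase:namespace falls back to the default SteppingSplitPolicy, whose desiredMaxFileSize is the 10 GB default scaled by the per-region jitter, i.e. 10737418240 * (1 + 0.07068139) is roughly 11496353920 as logged here (and 10737418240 * (1 - 0.05357972) is roughly 10162110400 for hbase:meta earlier). A sketch of the knobs involved, with the jitter key name being my best recollection rather than something confirmed by this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class SplitPolicySketch {
      public static Configuration splitConf() {
        Configuration conf = HBaseConfiguration.create();
        // Cluster-wide default split policy in HBase 2.x; DisabledRegionSplitPolicy can be set per table instead.
        conf.set("hbase.regionserver.region.split.policy",
            "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
        conf.setLong("hbase.hregion.max.filesize", 10737418240L);   // 10 GB base for desiredMaxFileSize
        // desiredMaxFileSize = max.filesize * (1 + jitterRate), jitterRate drawn from +/- half this value.
        conf.setFloat("hbase.hregion.max.filesize.jitter", 0.25f);
        return conf;
      }
    }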
2023-07-19 05:15:21,291 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b9b46c9a9a26be1b89c10c145434cfe8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:21,291 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689743721291"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743721291"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743721291"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743721291"}]},"ts":"1689743721291"} 2023-07-19 05:15:21,292 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=83121727a02707589af30990e9f79713, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:21,292 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689743721292"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743721292"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743721292"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743721292"}]},"ts":"1689743721292"} 2023-07-19 05:15:21,292 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713. 2023-07-19 05:15:21,295 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-19 05:15:21,295 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure b9b46c9a9a26be1b89c10c145434cfe8, server=jenkins-hbase4.apache.org,41955,1689743719898 in 194 msec 2023-07-19 05:15:21,296 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-19 05:15:21,296 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 83121727a02707589af30990e9f79713, server=jenkins-hbase4.apache.org,35969,1689743720058 in 191 msec 2023-07-19 05:15:21,297 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-19 05:15:21,297 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b9b46c9a9a26be1b89c10c145434cfe8, ASSIGN in 352 msec 2023-07-19 05:15:21,298 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:15:21,298 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-19 05:15:21,298 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=83121727a02707589af30990e9f79713, ASSIGN in 204 msec 2023-07-19 05:15:21,298 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743721298"}]},"ts":"1689743721298"} 2023-07-19 05:15:21,298 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:15:21,298 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743721298"}]},"ts":"1689743721298"} 2023-07-19 05:15:21,299 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-19 05:15:21,300 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-19 05:15:21,301 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:15:21,302 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-19 05:15:21,304 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713. 2023-07-19 05:15:21,304 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-19 05:15:21,304 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:21,306 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:15:21,306 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 406 msec 2023-07-19 05:15:21,309 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:15:21,309 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 265 msec 2023-07-19 05:15:21,310 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52292, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:15:21,313 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-19 05:15:21,324 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 05:15:21,327 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, 
state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-07-19 05:15:21,335 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-19 05:15:21,343 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 05:15:21,346 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39261,1689743719572] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:15:21,347 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-07-19 05:15:21,347 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38440, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:15:21,349 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-19 05:15:21,349 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-19 05:15:21,353 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:21,353 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:21,354 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 05:15:21,356 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-19 05:15:21,359 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-19 05:15:21,361 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-19 05:15:21,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.079sec 2023-07-19 05:15:21,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-19 05:15:21,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging 
to system table hbase:slowlog is disabled. Quitting. 2023-07-19 05:15:21,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-19 05:15:21,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39261,1689743719572-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-19 05:15:21,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39261,1689743719572-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-19 05:15:21,364 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-19 05:15:21,405 DEBUG [Listener at localhost/42441] zookeeper.ReadOnlyZKClient(139): Connect 0x788676c3 to 127.0.0.1:51693 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:15:21,411 DEBUG [Listener at localhost/42441] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1fa67591, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:15:21,412 DEBUG [hconnection-0xcbd9471-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:15:21,414 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51434, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:15:21,415 INFO [Listener at localhost/42441] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,39261,1689743719572 2023-07-19 05:15:21,416 INFO [Listener at localhost/42441] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:21,418 DEBUG [Listener at localhost/42441] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-19 05:15:21,419 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37146, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-19 05:15:21,422 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-19 05:15:21,422 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:21,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-19 05:15:21,423 DEBUG [Listener at localhost/42441] zookeeper.ReadOnlyZKClient(139): Connect 0x46ee48d8 to 127.0.0.1:51693 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:15:21,427 DEBUG [Listener at localhost/42441] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@26105268, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:15:21,427 INFO [Listener at localhost/42441] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:51693 2023-07-19 05:15:21,434 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 05:15:21,435 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1017c017df1000a connected 2023-07-19 05:15:21,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:21,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:21,440 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-19 05:15:21,452 INFO [Listener at localhost/42441] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 05:15:21,452 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:21,452 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:21,452 INFO [Listener at localhost/42441] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 05:15:21,452 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 05:15:21,452 INFO [Listener at localhost/42441] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 05:15:21,452 INFO [Listener at localhost/42441] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 05:15:21,453 INFO [Listener at localhost/42441] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34765 2023-07-19 05:15:21,453 INFO [Listener at localhost/42441] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 05:15:21,455 DEBUG [Listener at localhost/42441] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 05:15:21,455 INFO [Listener at localhost/42441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 05:15:21,456 INFO [Listener at localhost/42441] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 
05:15:21,457 INFO [Listener at localhost/42441] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34765 connecting to ZooKeeper ensemble=127.0.0.1:51693 2023-07-19 05:15:21,460 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:347650x0, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 05:15:21,462 DEBUG [Listener at localhost/42441] zookeeper.ZKUtil(162): regionserver:347650x0, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 05:15:21,463 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34765-0x1017c017df1000b connected 2023-07-19 05:15:21,463 DEBUG [Listener at localhost/42441] zookeeper.ZKUtil(162): regionserver:34765-0x1017c017df1000b, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-19 05:15:21,464 DEBUG [Listener at localhost/42441] zookeeper.ZKUtil(164): regionserver:34765-0x1017c017df1000b, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 05:15:21,466 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34765 2023-07-19 05:15:21,466 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34765 2023-07-19 05:15:21,466 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34765 2023-07-19 05:15:21,466 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34765 2023-07-19 05:15:21,469 DEBUG [Listener at localhost/42441] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34765 2023-07-19 05:15:21,471 INFO [Listener at localhost/42441] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 05:15:21,471 INFO [Listener at localhost/42441] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 05:15:21,471 INFO [Listener at localhost/42441] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 05:15:21,472 INFO [Listener at localhost/42441] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 05:15:21,472 INFO [Listener at localhost/42441] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 05:15:21,472 INFO [Listener at localhost/42441] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 05:15:21,472 INFO [Listener at localhost/42441] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-19 05:15:21,472 INFO [Listener at localhost/42441] http.HttpServer(1146): Jetty bound to port 37227 2023-07-19 05:15:21,472 INFO [Listener at localhost/42441] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 05:15:21,476 INFO [Listener at localhost/42441] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:21,476 INFO [Listener at localhost/42441] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@23f6f579{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/hadoop.log.dir/,AVAILABLE} 2023-07-19 05:15:21,476 INFO [Listener at localhost/42441] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:21,476 INFO [Listener at localhost/42441] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4e2d3074{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 05:15:21,589 INFO [Listener at localhost/42441] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 05:15:21,589 INFO [Listener at localhost/42441] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 05:15:21,590 INFO [Listener at localhost/42441] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 05:15:21,590 INFO [Listener at localhost/42441] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 05:15:21,591 INFO [Listener at localhost/42441] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 05:15:21,591 INFO [Listener at localhost/42441] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@33342480{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/java.io.tmpdir/jetty-0_0_0_0-37227-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6927492845219708815/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:21,593 INFO [Listener at localhost/42441] server.AbstractConnector(333): Started ServerConnector@4a3ed2e7{HTTP/1.1, (http/1.1)}{0.0.0.0:37227} 2023-07-19 05:15:21,593 INFO [Listener at localhost/42441] server.Server(415): Started @46021ms 2023-07-19 05:15:21,596 INFO [RS:3;jenkins-hbase4:34765] regionserver.HRegionServer(951): ClusterId : 9a8a703c-8c1c-4564-ac22-7d6ba9a86997 2023-07-19 05:15:21,597 DEBUG [RS:3;jenkins-hbase4:34765] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 05:15:21,599 DEBUG [RS:3;jenkins-hbase4:34765] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 05:15:21,599 DEBUG [RS:3;jenkins-hbase4:34765] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 05:15:21,600 DEBUG [RS:3;jenkins-hbase4:34765] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 05:15:21,601 DEBUG [RS:3;jenkins-hbase4:34765] zookeeper.ReadOnlyZKClient(139): Connect 0x504efecf to 
127.0.0.1:51693 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 05:15:21,606 DEBUG [RS:3;jenkins-hbase4:34765] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5e890ab6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 05:15:21,606 DEBUG [RS:3;jenkins-hbase4:34765] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ab695bf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 05:15:21,615 DEBUG [RS:3;jenkins-hbase4:34765] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:34765 2023-07-19 05:15:21,615 INFO [RS:3;jenkins-hbase4:34765] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 05:15:21,615 INFO [RS:3;jenkins-hbase4:34765] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 05:15:21,615 DEBUG [RS:3;jenkins-hbase4:34765] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 05:15:21,615 INFO [RS:3;jenkins-hbase4:34765] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39261,1689743719572 with isa=jenkins-hbase4.apache.org/172.31.14.131:34765, startcode=1689743721451 2023-07-19 05:15:21,615 DEBUG [RS:3;jenkins-hbase4:34765] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 05:15:21,618 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41023, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 05:15:21,618 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39261] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:21,618 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-19 05:15:21,619 DEBUG [RS:3;jenkins-hbase4:34765] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f 2023-07-19 05:15:21,619 DEBUG [RS:3;jenkins-hbase4:34765] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44175 2023-07-19 05:15:21,619 DEBUG [RS:3;jenkins-hbase4:34765] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44249 2023-07-19 05:15:21,623 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:21,623 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:21,623 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:21,623 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:21,623 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:21,624 DEBUG [RS:3;jenkins-hbase4:34765] zookeeper.ZKUtil(162): regionserver:34765-0x1017c017df1000b, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:21,624 WARN [RS:3;jenkins-hbase4:34765] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-19 05:15:21,624 INFO [RS:3;jenkins-hbase4:34765] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 05:15:21,624 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:21,624 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 05:15:21,624 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:21,624 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:21,624 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34765,1689743721451] 2023-07-19 05:15:21,624 DEBUG [RS:3;jenkins-hbase4:34765] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/WALs/jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:21,624 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:21,626 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39261,1689743719572] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-19 05:15:21,626 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:21,626 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:21,626 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:21,629 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:21,629 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:21,629 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:21,630 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:21,630 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:21,631 DEBUG [RS:3;jenkins-hbase4:34765] zookeeper.ZKUtil(162): regionserver:34765-0x1017c017df1000b, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:21,631 DEBUG [RS:3;jenkins-hbase4:34765] zookeeper.ZKUtil(162): regionserver:34765-0x1017c017df1000b, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:21,632 DEBUG [RS:3;jenkins-hbase4:34765] zookeeper.ZKUtil(162): regionserver:34765-0x1017c017df1000b, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:21,632 DEBUG [RS:3;jenkins-hbase4:34765] zookeeper.ZKUtil(162): regionserver:34765-0x1017c017df1000b, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:21,633 DEBUG [RS:3;jenkins-hbase4:34765] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 05:15:21,633 INFO [RS:3;jenkins-hbase4:34765] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 05:15:21,635 INFO [RS:3;jenkins-hbase4:34765] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 05:15:21,635 INFO [RS:3;jenkins-hbase4:34765] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 05:15:21,635 INFO [RS:3;jenkins-hbase4:34765] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:21,635 INFO [RS:3;jenkins-hbase4:34765] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 05:15:21,637 INFO [RS:3;jenkins-hbase4:34765] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-19 05:15:21,638 DEBUG [RS:3;jenkins-hbase4:34765] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:21,638 DEBUG [RS:3;jenkins-hbase4:34765] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:21,638 DEBUG [RS:3;jenkins-hbase4:34765] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:21,638 DEBUG [RS:3;jenkins-hbase4:34765] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:21,638 DEBUG [RS:3;jenkins-hbase4:34765] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:21,638 DEBUG [RS:3;jenkins-hbase4:34765] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 05:15:21,638 DEBUG [RS:3;jenkins-hbase4:34765] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:21,638 DEBUG [RS:3;jenkins-hbase4:34765] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:21,638 DEBUG [RS:3;jenkins-hbase4:34765] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:21,638 DEBUG [RS:3;jenkins-hbase4:34765] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 05:15:21,639 INFO [RS:3;jenkins-hbase4:34765] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:21,639 INFO [RS:3;jenkins-hbase4:34765] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:21,639 INFO [RS:3;jenkins-hbase4:34765] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 05:15:21,650 INFO [RS:3;jenkins-hbase4:34765] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 05:15:21,650 INFO [RS:3;jenkins-hbase4:34765] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34765,1689743721451-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 05:15:21,661 INFO [RS:3;jenkins-hbase4:34765] regionserver.Replication(203): jenkins-hbase4.apache.org,34765,1689743721451 started 2023-07-19 05:15:21,661 INFO [RS:3;jenkins-hbase4:34765] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34765,1689743721451, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34765, sessionid=0x1017c017df1000b 2023-07-19 05:15:21,661 DEBUG [RS:3;jenkins-hbase4:34765] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 05:15:21,661 DEBUG [RS:3;jenkins-hbase4:34765] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:21,661 DEBUG [RS:3;jenkins-hbase4:34765] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34765,1689743721451' 2023-07-19 05:15:21,661 DEBUG [RS:3;jenkins-hbase4:34765] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 05:15:21,661 DEBUG [RS:3;jenkins-hbase4:34765] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 05:15:21,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:21,662 DEBUG [RS:3;jenkins-hbase4:34765] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 05:15:21,662 DEBUG [RS:3;jenkins-hbase4:34765] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 05:15:21,662 DEBUG [RS:3;jenkins-hbase4:34765] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:21,662 DEBUG [RS:3;jenkins-hbase4:34765] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34765,1689743721451' 2023-07-19 05:15:21,662 DEBUG [RS:3;jenkins-hbase4:34765] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 05:15:21,662 DEBUG [RS:3;jenkins-hbase4:34765] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 05:15:21,663 DEBUG [RS:3;jenkins-hbase4:34765] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 05:15:21,663 INFO [RS:3;jenkins-hbase4:34765] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 05:15:21,663 INFO [RS:3;jenkins-hbase4:34765] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
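[Editor's note] The entries that follow record the test's cleanup path: the client adds an rsgroup named "master", then tries to move the master's address (jenkins-hbase4.apache.org:39261) into it, and the master rejects the move with a ConstraintException because that address is not a registered region server; TestRSGroupsBase logs the exception as "Got this on setup, FYI" and continues. The sketch below is a hypothetical reconstruction of that call pattern, not the test's actual code; it assumes the RSGroupAdminClient and Address APIs from the hbase-rsgroup module, and the class name and connection setup are illustrative only.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterToGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Corresponds to "add rsgroup master" in the log above.
      rsGroupAdmin.addRSGroup("master");

      // The master's RPC address; it is not a region server, so moving it is expected to fail.
      Address masterAddress = Address.fromParts("jenkins-hbase4.apache.org", 39261);
      try {
        rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
      } catch (ConstraintException e) {
        // Matches the logged rejection:
        // "Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist."
        System.out.println("Got this on setup, FYI: " + e.getMessage());
      }
    }
  }
}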
2023-07-19 05:15:21,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:21,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:21,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:21,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:21,678 DEBUG [hconnection-0x185e6e04-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:15:21,680 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51442, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:15:21,685 DEBUG [hconnection-0x185e6e04-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 05:15:21,688 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38446, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 05:15:21,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:21,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:21,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39261] to rsgroup master 2023-07-19 05:15:21,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:21,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:37146 deadline: 1689744921693, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 2023-07-19 05:15:21,693 WARN [Listener at localhost/42441] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:15:21,695 INFO [Listener at localhost/42441] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:21,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:21,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:21,696 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34765, jenkins-hbase4.apache.org:35969, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:46065], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:21,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:21,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:21,749 INFO [Listener at localhost/42441] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=559 (was 513) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 44175 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37145,1689743713754 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:44175 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46065 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46065 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data5/current/BP-983488047-172.31.14.131-1689743718450 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-542-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-7a3b6c88-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@4e967fc9 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1884557439@qtp-1127510542-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1113416655_17 at /127.0.0.1:47348 [Receiving block BP-983488047-172.31.14.131-1689743718450:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 33953 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:44175 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data4/current/BP-983488047-172.31.14.131-1689743718450 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@5f983009 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689743720452 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1113416655_17 at /127.0.0.1:59620 [Receiving block BP-983488047-172.31.14.131-1689743718450:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f-prefix:jenkins-hbase4.apache.org,41955,1689743719898 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42441-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data3/current/BP-983488047-172.31.14.131-1689743718450 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1127606976-2319 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/779537082.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp76658060-2247 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_909613845_17 at /127.0.0.1:60518 [Receiving block BP-983488047-172.31.14.131-1689743718450:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 42441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:44175 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-466755018_17 at /127.0.0.1:47362 [Receiving block BP-983488047-172.31.14.131-1689743718450:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-983488047-172.31.14.131-1689743718450:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x27fae21c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:39859 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46065 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 319509370@qtp-1341171297-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@44116af1 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1430030801-2278 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-983488047-172.31.14.131-1689743718450:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58776@0x49b7c08e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1181912059.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:41955-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-856903249_17 at /127.0.0.1:47322 [Receiving block BP-983488047-172.31.14.131-1689743718450:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1397505446-2219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data2/current/BP-983488047-172.31.14.131-1689743718450 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1026677558-2588 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-549-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:39261 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:34765 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1127606976-2321 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/779537082.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39261,1689743719572 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: 917465602@qtp-1127510542-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43567 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp117890833-2314 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:51693): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@332b35bb java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 44175 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-856903249_17 at /127.0.0.1:59584 [Receiving block 
BP-983488047-172.31.14.131-1689743718450:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1026677558-2584 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/779537082.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-544-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 940687611@qtp-1341171297-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45235 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:39859 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x504efecf-SendThread(127.0.0.1:51693) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp117890833-2311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@118c314b[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 44175 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:2;jenkins-hbase4:35969-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42441.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-466755018_17 at /127.0.0.1:59634 [Receiving block BP-983488047-172.31.14.131-1689743718450:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 33953 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1113416655_17 at /127.0.0.1:60534 [Receiving block BP-983488047-172.31.14.131-1689743718450:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1113416655_17 at /127.0.0.1:60548 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:44175 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689743720452 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-983488047-172.31.14.131-1689743718450 heartbeating to localhost/127.0.0.1:44175 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:39859 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1127606976-2320 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/779537082.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1127606976-2324 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-983488047-172.31.14.131-1689743718450:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6e740e22-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:0;jenkins-hbase4:46065-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 42441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f-prefix:jenkins-hbase4.apache.org,35969,1689743720058 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:34765-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-466755018_17 at /127.0.0.1:47396 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x0ba13049-SendThread(127.0.0.1:51693) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/42441-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/MasterData-prefix:jenkins-hbase4.apache.org,39261,1689743719572 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 46769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ForkJoinPool-2-worker-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x2aae53b8-SendThread(127.0.0.1:51693) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 3 on default port 44175 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-983488047-172.31.14.131-1689743718450:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x27fae21c-SendThread(127.0.0.1:51693) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/42441.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: IPC Server idle connection scanner for port 46769 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: jenkins-hbase4:34765Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp117890833-2313 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x0ba13049-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/42441-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: BP-983488047-172.31.14.131-1689743718450 heartbeating to localhost/127.0.0.1:44175 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-983488047-172.31.14.131-1689743718450:blk_1073741829_1005, type=LAST_IN_PIPELINE 
java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data1/current/BP-983488047-172.31.14.131-1689743718450 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-466755018_17 at /127.0.0.1:60536 [Receiving block BP-983488047-172.31.14.131-1689743718450:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46065 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@703c0b8f java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:39261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: IPC Server handler 2 on default port 33953 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 919478019@qtp-2136289530-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@14b0ec76[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3c610f68 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1430030801-2282 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@61d85dc9 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@6fd4fb47 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:44175 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@70e44792 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x504efecf-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-567-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-983488047-172.31.14.131-1689743718450:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: qtp1026677558-2591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-983488047-172.31.14.131-1689743718450:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1026677558-2589 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1397505446-2218 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 42441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp76658060-2250 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x46ee48d8-SendThread(127.0.0.1:51693) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1397505446-2221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_909613845_17 at /127.0.0.1:52274 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp117890833-2307 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/779537082.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:41955Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-76de7ed7-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x46ee48d8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1181912059.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-983488047-172.31.14.131-1689743718450 heartbeating to localhost/127.0.0.1:44175 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp76658060-2252 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1430030801-2281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:39859 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1397505446-2217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1397505446-2215-acceptor-0@5986c2ae-ServerConnector@147da2bf{HTTP/1.1, (http/1.1)}{0.0.0.0:44249} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_909613845_17 at /127.0.0.1:59604 [Receiving block BP-983488047-172.31.14.131-1689743718450:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x185e6e04-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 33953 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-983488047-172.31.14.131-1689743718450:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 44175 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1430030801-2276-acceptor-0@698eb4d5-ServerConnector@802ef93{HTTP/1.1, (http/1.1)}{0.0.0.0:39657} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp76658060-2245 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/779537082.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58776@0x49b7c08e-SendThread(127.0.0.1:58776) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: qtp1397505446-2220 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:41955 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1430030801-2277 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(303189071) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: RS-EventLoopGroup-12-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 33953 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins@localhost:44175 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp76658060-2246-acceptor-0@79b8e9b6-ServerConnector@1e767489{HTTP/1.1, (http/1.1)}{0.0.0.0:40853} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-558-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x290a981c-SendThread(127.0.0.1:51693) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@18e96d45 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client 
(340318198) connection to localhost/127.0.0.1:39859 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f-prefix:jenkins-hbase4.apache.org,46065,1689743719751.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:44175 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Listener at localhost/42441-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f-prefix:jenkins-hbase4.apache.org,46065,1689743719751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42441 java.lang.Thread.dumpThreads(Native Method) 
java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:39859 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp117890833-2310 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 46769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1113416655_17 at /127.0.0.1:59638 [Receiving block BP-983488047-172.31.14.131-1689743718450:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1430030801-2279 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 42441 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x0ba13049 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1181912059.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43345-SendThread(127.0.0.1:58776) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-856903249_17 at /127.0.0.1:60480 [Receiving block BP-983488047-172.31.14.131-1689743718450:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1127606976-2323 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins@localhost:39859 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x27fae21c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1181912059.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42441-SendThread(127.0.0.1:51693) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 33953 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x46ee48d8-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:51693 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46065 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46065 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42441-SendThread(127.0.0.1:51693) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@18d37fe1 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42441-SendThread(127.0.0.1:51693) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:2;jenkins-hbase4:35969 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-983488047-172.31.14.131-1689743718450:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 42441 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@1842251f[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp117890833-2312 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x2aae53b8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1181912059.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:35969Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-983488047-172.31.14.131-1689743718450:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42441.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: IPC Server handler 1 on default port 46769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp117890833-2308-acceptor-0@2d23bb8a-ServerConnector@478dee5b{HTTP/1.1, (http/1.1)}{0.0.0.0:35323} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Session-HouseKeeper-1ae9c6e-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp76658060-2248 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x290a981c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1181912059.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1127606976-2318 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/779537082.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-563-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43345-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 58550340@qtp-822178126-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40231 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-983488047-172.31.14.131-1689743718450:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42441-SendThread(127.0.0.1:51693) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 0 on default port 44175 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1113416655_17 at /127.0.0.1:60560 [Receiving block BP-983488047-172.31.14.131-1689743718450:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42441-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially 
hanging thread: Listener at localhost/42441-SendThread(127.0.0.1:51693) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 46769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1397505446-2214 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/779537082.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data6) java.lang.Object.wait(Native Method) 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1318963654@qtp-2136289530-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40815 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp1127606976-2325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:44175 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1127606976-2322-acceptor-0@3b6e9928-ServerConnector@4df21cd6{HTTP/1.1, (http/1.1)}{0.0.0.0:36379} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_909613845_17 at /127.0.0.1:47338 [Receiving block BP-983488047-172.31.14.131-1689743718450:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-983488047-172.31.14.131-1689743718450:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data6/current/BP-983488047-172.31.14.131-1689743718450 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:39859 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x185e6e04-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1026677558-2587 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 42441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:39859 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1430030801-2280 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x788676c3-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-554-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:46065 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-983488047-172.31.14.131-1689743718450:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:46065Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1026677558-2586 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@2dae6e67 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1397505446-2216 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x290a981c-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46065 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 46769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1026677558-2590 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp117890833-2309 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x788676c3-SendThread(127.0.0.1:51693) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0xcbd9471-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58776@0x49b7c08e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x504efecf sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1181912059.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1430030801-2275 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/779537082.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-4de0c4b7-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:44175 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-983488047-172.31.14.131-1689743718450:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x2aae53b8-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp76658060-2251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46065 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1113416655_17 at /127.0.0.1:47366 [Receiving block BP-983488047-172.31.14.131-1689743718450:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46065 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42441.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: qtp1026677558-2585-acceptor-0@373ed10c-ServerConnector@4a3ed2e7{HTTP/1.1, (http/1.1)}{0.0.0.0:37227} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-983488047-172.31.14.131-1689743718450:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51693@0x788676c3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1181912059.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46065 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@6880b315 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 1987527070@qtp-822178126-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Listener at localhost/42441-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42441-SendThread(127.0.0.1:51693) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:44175 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp76658060-2249 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=826 (was 809) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=397 (was 387) - SystemLoadAverage LEAK? 
-, ProcessCount=171 (was 171), AvailableMemoryMB=4904 (was 5101) 2023-07-19 05:15:21,753 WARN [Listener at localhost/42441] hbase.ResourceChecker(130): Thread=559 is superior to 500 2023-07-19 05:15:21,765 INFO [RS:3;jenkins-hbase4:34765] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34765%2C1689743721451, suffix=, logDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/WALs/jenkins-hbase4.apache.org,34765,1689743721451, archiveDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/oldWALs, maxLogs=32 2023-07-19 05:15:21,775 INFO [Listener at localhost/42441] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=559, OpenFileDescriptor=826, MaxFileDescriptor=60000, SystemLoadAverage=397, ProcessCount=171, AvailableMemoryMB=4900 2023-07-19 05:15:21,775 WARN [Listener at localhost/42441] hbase.ResourceChecker(130): Thread=559 is superior to 500 2023-07-19 05:15:21,776 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-19 05:15:21,785 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38475,DS-74be87a0-13f3-435a-adac-f9bd6c929921,DISK] 2023-07-19 05:15:21,785 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41985,DS-345d74be-872e-4d52-9cb2-51f897ffa631,DISK] 2023-07-19 05:15:21,790 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35727,DS-f3cc29e5-9316-4bf7-8e06-c992433b44f8,DISK] 2023-07-19 05:15:21,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:21,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:21,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:21,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
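The WAL line above reports blocksize=256 MB, rollsize=128 MB, and maxLogs=32 for the newly started region server jenkins-hbase4.apache.org,34765,1689743721451. As a rough illustration only (not part of the test output), such figures normally follow from standard HBase configuration keys, with the roll size derived as block size times the log-roll multiplier; the values in the sketch below are assumptions chosen to match the logged numbers.

// Illustrative sketch, not taken from the test: assumed configuration keys behind the
// "WAL configuration: blocksize=256 MB, rollsize=128 MB, ..., maxLogs=32" log line above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024); // WAL block size (256 MB)
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);          // roll at half a block
    conf.setInt("hbase.regionserver.maxlogs", 32);                         // cap on retained WAL files

    long blocksize = conf.getLong("hbase.regionserver.hlog.blocksize", 0L);
    float multiplier = conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f);
    // rollsize = blocksize * multiplier -> 128 MB, matching the logged value.
    System.out.println("rollsize=" + (long) (blocksize * multiplier) + " bytes");
  }
}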
2023-07-19 05:15:21,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:21,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:21,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:21,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:21,799 INFO [RS:3;jenkins-hbase4:34765] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/WALs/jenkins-hbase4.apache.org,34765,1689743721451/jenkins-hbase4.apache.org%2C34765%2C1689743721451.1689743721766 2023-07-19 05:15:21,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:21,802 DEBUG [RS:3;jenkins-hbase4:34765] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38475,DS-74be87a0-13f3-435a-adac-f9bd6c929921,DISK], DatanodeInfoWithStorage[127.0.0.1:41985,DS-345d74be-872e-4d52-9cb2-51f897ffa631,DISK], DatanodeInfoWithStorage[127.0.0.1:35727,DS-f3cc29e5-9316-4bf7-8e06-c992433b44f8,DISK]] 2023-07-19 05:15:21,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:21,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:21,807 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:21,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:21,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:21,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:21,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:21,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:21,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:21,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:21,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39261] to rsgroup master 2023-07-19 05:15:21,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:21,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:37146 deadline: 1689744921821, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 2023-07-19 05:15:21,822 WARN [Listener at localhost/42441] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 05:15:21,824 INFO [Listener at localhost/42441] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:21,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:21,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:21,825 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34765, jenkins-hbase4.apache.org:35969, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:46065], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:21,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:21,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:21,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:21,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-19 05:15:21,831 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:15:21,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-19 05:15:21,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-19 05:15:21,832 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:21,833 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:21,833 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:21,835 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 05:15:21,837 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/default/t1/65c28a92c7a709bf991ad267a3651fa9 2023-07-19 
05:15:21,838 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/default/t1/65c28a92c7a709bf991ad267a3651fa9 empty. 2023-07-19 05:15:21,838 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/default/t1/65c28a92c7a709bf991ad267a3651fa9 2023-07-19 05:15:21,838 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-19 05:15:21,864 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-19 05:15:21,865 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 65c28a92c7a709bf991ad267a3651fa9, NAME => 't1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp 2023-07-19 05:15:21,911 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:21,911 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 65c28a92c7a709bf991ad267a3651fa9, disabling compactions & flushes 2023-07-19 05:15:21,911 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9. 2023-07-19 05:15:21,911 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9. 2023-07-19 05:15:21,911 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9. after waiting 0 ms 2023-07-19 05:15:21,911 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9. 2023-07-19 05:15:21,912 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9. 2023-07-19 05:15:21,912 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 65c28a92c7a709bf991ad267a3651fa9: 2023-07-19 05:15:21,915 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 05:15:21,916 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689743721916"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743721916"}]},"ts":"1689743721916"} 2023-07-19 05:15:21,918 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-19 05:15:21,918 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 05:15:21,919 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743721919"}]},"ts":"1689743721919"} 2023-07-19 05:15:21,920 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-19 05:15:21,925 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 05:15:21,925 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 05:15:21,925 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 05:15:21,925 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 05:15:21,925 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-19 05:15:21,925 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 05:15:21,925 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=65c28a92c7a709bf991ad267a3651fa9, ASSIGN}] 2023-07-19 05:15:21,932 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=65c28a92c7a709bf991ad267a3651fa9, ASSIGN 2023-07-19 05:15:21,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-19 05:15:21,934 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=65c28a92c7a709bf991ad267a3651fa9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34765,1689743721451; forceNewPlan=false, retain=false 2023-07-19 05:15:22,084 INFO [jenkins-hbase4:39261] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-19 05:15:22,086 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=65c28a92c7a709bf991ad267a3651fa9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:22,086 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689743722085"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743722085"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743722085"}]},"ts":"1689743722085"} 2023-07-19 05:15:22,087 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 65c28a92c7a709bf991ad267a3651fa9, server=jenkins-hbase4.apache.org,34765,1689743721451}] 2023-07-19 05:15:22,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-19 05:15:22,174 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-19 05:15:22,240 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:22,240 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 05:15:22,241 INFO [RS-EventLoopGroup-16-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40276, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 05:15:22,247 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9. 
2023-07-19 05:15:22,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 65c28a92c7a709bf991ad267a3651fa9, NAME => 't1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9.', STARTKEY => '', ENDKEY => ''} 2023-07-19 05:15:22,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 65c28a92c7a709bf991ad267a3651fa9 2023-07-19 05:15:22,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 05:15:22,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 65c28a92c7a709bf991ad267a3651fa9 2023-07-19 05:15:22,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 65c28a92c7a709bf991ad267a3651fa9 2023-07-19 05:15:22,251 INFO [StoreOpener-65c28a92c7a709bf991ad267a3651fa9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 65c28a92c7a709bf991ad267a3651fa9 2023-07-19 05:15:22,259 DEBUG [StoreOpener-65c28a92c7a709bf991ad267a3651fa9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/default/t1/65c28a92c7a709bf991ad267a3651fa9/cf1 2023-07-19 05:15:22,259 DEBUG [StoreOpener-65c28a92c7a709bf991ad267a3651fa9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/default/t1/65c28a92c7a709bf991ad267a3651fa9/cf1 2023-07-19 05:15:22,260 INFO [StoreOpener-65c28a92c7a709bf991ad267a3651fa9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 65c28a92c7a709bf991ad267a3651fa9 columnFamilyName cf1 2023-07-19 05:15:22,261 INFO [StoreOpener-65c28a92c7a709bf991ad267a3651fa9-1] regionserver.HStore(310): Store=65c28a92c7a709bf991ad267a3651fa9/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 05:15:22,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/default/t1/65c28a92c7a709bf991ad267a3651fa9 2023-07-19 05:15:22,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/default/t1/65c28a92c7a709bf991ad267a3651fa9 2023-07-19 05:15:22,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 65c28a92c7a709bf991ad267a3651fa9 2023-07-19 05:15:22,269 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/default/t1/65c28a92c7a709bf991ad267a3651fa9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 05:15:22,270 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 65c28a92c7a709bf991ad267a3651fa9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11244909120, jitterRate=0.047263771295547485}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 05:15:22,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 65c28a92c7a709bf991ad267a3651fa9: 2023-07-19 05:15:22,272 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9., pid=14, masterSystemTime=1689743722240 2023-07-19 05:15:22,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9. 2023-07-19 05:15:22,279 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=65c28a92c7a709bf991ad267a3651fa9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:22,279 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689743722279"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689743722279"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689743722279"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689743722279"}]},"ts":"1689743722279"} 2023-07-19 05:15:22,281 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9. 
2023-07-19 05:15:22,283 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-19 05:15:22,283 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 65c28a92c7a709bf991ad267a3651fa9, server=jenkins-hbase4.apache.org,34765,1689743721451 in 193 msec 2023-07-19 05:15:22,286 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-19 05:15:22,286 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=65c28a92c7a709bf991ad267a3651fa9, ASSIGN in 358 msec 2023-07-19 05:15:22,286 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 05:15:22,287 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743722287"}]},"ts":"1689743722287"} 2023-07-19 05:15:22,288 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-19 05:15:22,293 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 05:15:22,299 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 466 msec 2023-07-19 05:15:22,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-19 05:15:22,435 INFO [Listener at localhost/42441] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-19 05:15:22,436 DEBUG [Listener at localhost/42441] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-19 05:15:22,436 INFO [Listener at localhost/42441] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:22,438 INFO [Listener at localhost/42441] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-19 05:15:22,438 INFO [Listener at localhost/42441] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:22,438 INFO [Listener at localhost/42441] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-19 05:15:22,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 05:15:22,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-19 05:15:22,443 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 05:15:22,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-19 05:15:22,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 352 connection: 172.31.14.131:37146 deadline: 1689743782440, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-19 05:15:22,445 INFO [Listener at localhost/42441] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:22,446 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-19 05:15:22,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:22,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:22,547 INFO [Listener at localhost/42441] client.HBaseAdmin$15(890): Started disable of t1 2023-07-19 05:15:22,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-19 05:15:22,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-19 05:15:22,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-19 05:15:22,551 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743722551"}]},"ts":"1689743722551"} 2023-07-19 05:15:22,552 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-19 05:15:22,554 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-19 05:15:22,554 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=65c28a92c7a709bf991ad267a3651fa9, UNASSIGN}] 2023-07-19 05:15:22,555 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=65c28a92c7a709bf991ad267a3651fa9, UNASSIGN 2023-07-19 05:15:22,556 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=65c28a92c7a709bf991ad267a3651fa9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:22,556 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689743722555"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689743722555"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689743722555"}]},"ts":"1689743722555"} 2023-07-19 05:15:22,557 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 65c28a92c7a709bf991ad267a3651fa9, server=jenkins-hbase4.apache.org,34765,1689743721451}] 2023-07-19 05:15:22,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-19 05:15:22,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 65c28a92c7a709bf991ad267a3651fa9 2023-07-19 05:15:22,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 65c28a92c7a709bf991ad267a3651fa9, disabling compactions & flushes 2023-07-19 05:15:22,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9. 2023-07-19 05:15:22,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9. 2023-07-19 05:15:22,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9. after waiting 0 ms 2023-07-19 05:15:22,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9. 
2023-07-19 05:15:22,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/default/t1/65c28a92c7a709bf991ad267a3651fa9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 05:15:22,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9. 2023-07-19 05:15:22,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 65c28a92c7a709bf991ad267a3651fa9: 2023-07-19 05:15:22,715 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 65c28a92c7a709bf991ad267a3651fa9 2023-07-19 05:15:22,715 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=65c28a92c7a709bf991ad267a3651fa9, regionState=CLOSED 2023-07-19 05:15:22,715 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689743722715"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689743722715"}]},"ts":"1689743722715"} 2023-07-19 05:15:22,717 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-19 05:15:22,717 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 65c28a92c7a709bf991ad267a3651fa9, server=jenkins-hbase4.apache.org,34765,1689743721451 in 159 msec 2023-07-19 05:15:22,720 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-19 05:15:22,720 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=65c28a92c7a709bf991ad267a3651fa9, UNASSIGN in 163 msec 2023-07-19 05:15:22,720 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689743722720"}]},"ts":"1689743722720"} 2023-07-19 05:15:22,721 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-19 05:15:22,724 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-19 05:15:22,725 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 176 msec 2023-07-19 05:15:22,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-19 05:15:22,853 INFO [Listener at localhost/42441] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-19 05:15:22,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-19 05:15:22,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-19 05:15:22,857 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-19 05:15:22,857 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-19 05:15:22,858 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-19 05:15:22,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:22,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:22,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:22,861 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/default/t1/65c28a92c7a709bf991ad267a3651fa9 2023-07-19 05:15:22,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-19 05:15:22,862 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/default/t1/65c28a92c7a709bf991ad267a3651fa9/cf1, FileablePath, hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/default/t1/65c28a92c7a709bf991ad267a3651fa9/recovered.edits] 2023-07-19 05:15:22,867 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/default/t1/65c28a92c7a709bf991ad267a3651fa9/recovered.edits/4.seqid to hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/archive/data/default/t1/65c28a92c7a709bf991ad267a3651fa9/recovered.edits/4.seqid 2023-07-19 05:15:22,868 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/.tmp/data/default/t1/65c28a92c7a709bf991ad267a3651fa9 2023-07-19 05:15:22,868 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-19 05:15:22,870 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-19 05:15:22,871 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-19 05:15:22,873 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-19 05:15:22,873 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-19 05:15:22,874 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-19 05:15:22,874 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689743722874"}]},"ts":"9223372036854775807"} 2023-07-19 05:15:22,875 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-19 05:15:22,875 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 65c28a92c7a709bf991ad267a3651fa9, NAME => 't1,,1689743721827.65c28a92c7a709bf991ad267a3651fa9.', STARTKEY => '', ENDKEY => ''}] 2023-07-19 05:15:22,875 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-19 05:15:22,875 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689743722875"}]},"ts":"9223372036854775807"} 2023-07-19 05:15:22,876 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-19 05:15:22,878 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-19 05:15:22,879 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 24 msec 2023-07-19 05:15:22,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-19 05:15:22,963 INFO [Listener at localhost/42441] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-19 05:15:22,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:22,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:22,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:22,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 05:15:22,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:22,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:22,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:22,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:22,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:22,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:22,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:22,979 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:22,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:22,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:22,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:22,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:22,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:22,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:22,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:22,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39261] to rsgroup master 2023-07-19 05:15:22,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:22,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:37146 deadline: 1689744922988, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 2023-07-19 05:15:22,989 WARN [Listener at localhost/42441] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:15:22,992 INFO [Listener at localhost/42441] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:22,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:22,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:22,994 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34765, jenkins-hbase4.apache.org:35969, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:46065], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:22,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:22,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:23,013 INFO [Listener at localhost/42441] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=574 (was 559) - Thread LEAK? -, OpenFileDescriptor=841 (was 826) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=397 (was 397), ProcessCount=171 (was 171), AvailableMemoryMB=4863 (was 4900) 2023-07-19 05:15:23,013 WARN [Listener at localhost/42441] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-19 05:15:23,030 INFO [Listener at localhost/42441] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=574, OpenFileDescriptor=841, MaxFileDescriptor=60000, SystemLoadAverage=397, ProcessCount=171, AvailableMemoryMB=4862 2023-07-19 05:15:23,031 WARN [Listener at localhost/42441] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-19 05:15:23,031 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-19 05:15:23,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:23,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 05:15:23,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:23,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:23,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:23,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:23,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:23,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:23,046 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:23,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:23,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,049 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:23,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:23,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:23,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39261] to rsgroup master 2023-07-19 05:15:23,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:23,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:37146 deadline: 1689744923057, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 2023-07-19 05:15:23,057 WARN [Listener at localhost/42441] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 05:15:23,059 INFO [Listener at localhost/42441] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:23,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,060 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34765, jenkins-hbase4.apache.org:35969, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:46065], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:23,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:23,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:23,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-19 05:15:23,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:15:23,063 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-19 05:15:23,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-19 05:15:23,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 05:15:23,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:23,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
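The repeated ConstraintException traces above all come from the same per-method reset in TestRSGroupsBase.tearDownAfterMethod (TestRSGroupsBase.java:161): after removing and re-adding the "master" rsgroup, the helper tries to move the master's address (jenkins-hbase4.apache.org:39261, the port in every RpcServer handler thread name) into that group, and RSGroupAdminServer.moveServers rejects it because that address is not among the online region servers of the default group (34765, 35969, 41955, 46065); the test merely logs it as "Got this on setup, FYI". A minimal client-side sketch of that rejected call, assuming the branch-2.4 RSGroupAdminClient API visible in the stack traces (the class name, connection handle and exception handling here are illustrative, not the test's exact code):

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Sketch of the moveServers call behind the ConstraintException traces above.
// 'connection' is assumed to point at the mini-cluster master (RPC port 39261).
public final class MoveMasterToGroupSketch {
  static void moveMasterIntoMasterGroup(Connection connection) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
    Address master = Address.fromParts("jenkins-hbase4.apache.org", 39261);
    try {
      // The master's address is not a known online region server, so the server side
      // rejects this with "Server ... is either offline or it does not exist."
      rsGroupAdmin.moveServers(Collections.singleton(master), "master");
    } catch (ConstraintException expected) {
      // TestRSGroupsBase.tearDownAfterMethod only logs this ("Got this on setup, FYI").
    }
  }
}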
2023-07-19 05:15:23,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:23,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:23,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:23,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:23,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:23,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:23,082 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:23,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:23,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:23,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:23,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:23,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39261] to rsgroup master 2023-07-19 05:15:23,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:23,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:37146 deadline: 1689744923093, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 2023-07-19 05:15:23,093 WARN [Listener at localhost/42441] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:15:23,096 INFO [Listener at localhost/42441] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:23,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,097 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34765, jenkins-hbase4.apache.org:35969, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:46065], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:23,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:23,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:23,124 INFO [Listener at localhost/42441] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=576 (was 574) - Thread LEAK? 
-, OpenFileDescriptor=841 (was 841), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=397 (was 397), ProcessCount=171 (was 171), AvailableMemoryMB=4857 (was 4862) 2023-07-19 05:15:23,124 WARN [Listener at localhost/42441] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-19 05:15:23,149 INFO [Listener at localhost/42441] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=576, OpenFileDescriptor=841, MaxFileDescriptor=60000, SystemLoadAverage=397, ProcessCount=171, AvailableMemoryMB=4856 2023-07-19 05:15:23,150 WARN [Listener at localhost/42441] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-19 05:15:23,150 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-19 05:15:23,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:23,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 05:15:23,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:23,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:23,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:23,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:23,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:23,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:23,167 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:23,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:23,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,170 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:23,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:23,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:23,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39261] to rsgroup master 2023-07-19 05:15:23,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:23,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:37146 deadline: 1689744923179, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 2023-07-19 05:15:23,179 WARN [Listener at localhost/42441] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 05:15:23,181 INFO [Listener at localhost/42441] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:23,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,182 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34765, jenkins-hbase4.apache.org:35969, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:46065], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:23,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:23,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:23,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:23,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 05:15:23,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:23,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:23,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:23,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:23,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:23,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:23,200 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:23,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:23,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:23,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:23,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:23,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39261] to rsgroup master 2023-07-19 05:15:23,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:23,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:37146 deadline: 1689744923209, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 2023-07-19 05:15:23,210 WARN [Listener at localhost/42441] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:15:23,212 INFO [Listener at localhost/42441] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:23,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,213 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34765, jenkins-hbase4.apache.org:35969, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:46065], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:23,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:23,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:23,231 INFO [Listener at localhost/42441] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=577 (was 576) - Thread LEAK? 
-, OpenFileDescriptor=841 (was 841), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=397 (was 397), ProcessCount=171 (was 171), AvailableMemoryMB=4854 (was 4856) 2023-07-19 05:15:23,231 WARN [Listener at localhost/42441] hbase.ResourceChecker(130): Thread=577 is superior to 500 2023-07-19 05:15:23,248 INFO [Listener at localhost/42441] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=577, OpenFileDescriptor=841, MaxFileDescriptor=60000, SystemLoadAverage=397, ProcessCount=171, AvailableMemoryMB=4854 2023-07-19 05:15:23,248 WARN [Listener at localhost/42441] hbase.ResourceChecker(130): Thread=577 is superior to 500 2023-07-19 05:15:23,248 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-19 05:15:23,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:23,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 05:15:23,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:23,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:23,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:23,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:23,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:23,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:23,262 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:23,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:23,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,265 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:23,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:23,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:23,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39261] to rsgroup master 2023-07-19 05:15:23,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:23,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:37146 deadline: 1689744923270, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 2023-07-19 05:15:23,271 WARN [Listener at localhost/42441] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 05:15:23,273 INFO [Listener at localhost/42441] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:23,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,274 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34765, jenkins-hbase4.apache.org:35969, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:46065], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:23,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:23,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:23,275 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-19 05:15:23,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-19 05:15:23,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-19 05:15:23,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:23,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 05:15:23,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:23,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-19 05:15:23,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-19 05:15:23,289 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-19 05:15:23,292 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 05:15:23,295 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-19 05:15:23,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-19 05:15:23,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-19 05:15:23,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:23,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:37146 deadline: 1689744923391, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-19 05:15:23,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-19 05:15:23,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-19 05:15:23,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-19 05:15:23,417 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-19 05:15:23,418 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 19 msec 2023-07-19 05:15:23,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-19 05:15:23,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-19 05:15:23,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-19 05:15:23,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-19 05:15:23,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:23,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 05:15:23,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:23,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-19 05:15:23,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 05:15:23,527 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 05:15:23,530 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 05:15:23,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-19 05:15:23,531 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 05:15:23,532 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-19 05:15:23,532 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 05:15:23,533 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 05:15:23,534 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 05:15:23,535 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-19 05:15:23,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-19 05:15:23,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-19 05:15:23,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-19 05:15:23,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:23,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-19 05:15:23,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:23,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:23,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:37146 deadline: 1689743783642, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-19 05:15:23,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:23,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 05:15:23,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:23,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:23,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:23,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-19 05:15:23,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:23,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 05:15:23,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:23,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 05:15:23,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 05:15:23,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 05:15:23,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 05:15:23,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 05:15:23,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 05:15:23,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 05:15:23,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 05:15:23,661 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 05:15:23,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 05:15:23,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 05:15:23,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 05:15:23,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 05:15:23,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 05:15:23,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39261] to rsgroup master 2023-07-19 05:15:23,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 05:15:23,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:37146 deadline: 1689744923670, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 2023-07-19 05:15:23,671 WARN [Listener at localhost/42441] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39261 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 05:15:23,673 INFO [Listener at localhost/42441] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 05:15:23,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 05:15:23,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 05:15:23,674 INFO [Listener at localhost/42441] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34765, jenkins-hbase4.apache.org:35969, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:46065], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 05:15:23,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 05:15:23,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 05:15:23,692 INFO [Listener at localhost/42441] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=577 (was 577), OpenFileDescriptor=841 (was 841), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=397 (was 397), ProcessCount=171 (was 171), AvailableMemoryMB=4853 (was 4854) 2023-07-19 05:15:23,693 WARN [Listener at localhost/42441] hbase.ResourceChecker(130): Thread=577 is superior to 500 2023-07-19 05:15:23,693 INFO [Listener at localhost/42441] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-19 05:15:23,693 INFO [Listener at localhost/42441] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-19 05:15:23,693 DEBUG [Listener at localhost/42441] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x788676c3 to 127.0.0.1:51693 2023-07-19 05:15:23,693 DEBUG [Listener at localhost/42441] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:23,693 DEBUG [Listener at localhost/42441] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-19 
05:15:23,693 DEBUG [Listener at localhost/42441] util.JVMClusterUtil(257): Found active master hash=746450421, stopped=false 2023-07-19 05:15:23,693 DEBUG [Listener at localhost/42441] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-19 05:15:23,693 DEBUG [Listener at localhost/42441] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-19 05:15:23,693 INFO [Listener at localhost/42441] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,39261,1689743719572 2023-07-19 05:15:23,697 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:23,697 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:23,697 INFO [Listener at localhost/42441] procedure2.ProcedureExecutor(629): Stopping 2023-07-19 05:15:23,697 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:23,697 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:23,697 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:34765-0x1017c017df1000b, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 05:15:23,697 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:23,697 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:23,697 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:23,697 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:23,697 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:23,698 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34765-0x1017c017df1000b, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 05:15:23,698 DEBUG [Listener at localhost/42441] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0ba13049 to 127.0.0.1:51693 
2023-07-19 05:15:23,698 DEBUG [Listener at localhost/42441] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:23,698 INFO [Listener at localhost/42441] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46065,1689743719751' ***** 2023-07-19 05:15:23,698 INFO [Listener at localhost/42441] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 05:15:23,698 INFO [Listener at localhost/42441] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41955,1689743719898' ***** 2023-07-19 05:15:23,698 INFO [Listener at localhost/42441] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 05:15:23,698 INFO [Listener at localhost/42441] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35969,1689743720058' ***** 2023-07-19 05:15:23,698 INFO [Listener at localhost/42441] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 05:15:23,698 INFO [Listener at localhost/42441] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34765,1689743721451' ***** 2023-07-19 05:15:23,698 INFO [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 05:15:23,698 INFO [Listener at localhost/42441] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 05:15:23,698 INFO [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 05:15:23,698 INFO [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 05:15:23,699 INFO [RS:3;jenkins-hbase4:34765] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 05:15:23,706 INFO [RS:3;jenkins-hbase4:34765] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@33342480{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:23,706 INFO [RS:1;jenkins-hbase4:41955] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@53594c28{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:23,706 INFO [RS:0;jenkins-hbase4:46065] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@22b53ac5{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:23,706 INFO [RS:2;jenkins-hbase4:35969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6bdf1afe{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 05:15:23,707 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:23,707 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:23,707 INFO [RS:1;jenkins-hbase4:41955] server.AbstractConnector(383): Stopped ServerConnector@802ef93{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:23,707 INFO [RS:3;jenkins-hbase4:34765] server.AbstractConnector(383): Stopped ServerConnector@4a3ed2e7{HTTP/1.1, 
(http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:23,707 INFO [RS:1;jenkins-hbase4:41955] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 05:15:23,707 INFO [RS:2;jenkins-hbase4:35969] server.AbstractConnector(383): Stopped ServerConnector@478dee5b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:23,708 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 05:15:23,708 INFO [RS:1;jenkins-hbase4:41955] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3c6fadb4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 05:15:23,707 INFO [RS:0;jenkins-hbase4:46065] server.AbstractConnector(383): Stopped ServerConnector@1e767489{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:23,709 INFO [RS:1;jenkins-hbase4:41955] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@34c9e66a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/hadoop.log.dir/,STOPPED} 2023-07-19 05:15:23,709 INFO [RS:0;jenkins-hbase4:46065] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 05:15:23,708 INFO [RS:2;jenkins-hbase4:35969] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 05:15:23,708 INFO [RS:3;jenkins-hbase4:34765] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 05:15:23,710 INFO [RS:2;jenkins-hbase4:35969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@33d0f78e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 05:15:23,710 INFO [RS:0;jenkins-hbase4:46065] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@67712218{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 05:15:23,711 INFO [RS:2;jenkins-hbase4:35969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@10bd5ffd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/hadoop.log.dir/,STOPPED} 2023-07-19 05:15:23,712 INFO [RS:0;jenkins-hbase4:46065] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@606e99a1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/hadoop.log.dir/,STOPPED} 2023-07-19 05:15:23,711 INFO [RS:1;jenkins-hbase4:41955] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 05:15:23,711 INFO [RS:3;jenkins-hbase4:34765] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4e2d3074{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 05:15:23,713 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 05:15:23,713 INFO [RS:1;jenkins-hbase4:41955] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-19 05:15:23,714 INFO [RS:1;jenkins-hbase4:41955] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 05:15:23,714 INFO [RS:0;jenkins-hbase4:46065] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 05:15:23,713 INFO [RS:3;jenkins-hbase4:34765] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@23f6f579{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/hadoop.log.dir/,STOPPED} 2023-07-19 05:15:23,714 INFO [RS:0;jenkins-hbase4:46065] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 05:15:23,714 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 05:15:23,714 INFO [RS:2;jenkins-hbase4:35969] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 05:15:23,714 INFO [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer(3305): Received CLOSE for b9b46c9a9a26be1b89c10c145434cfe8 2023-07-19 05:15:23,714 INFO [RS:2;jenkins-hbase4:35969] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 05:15:23,714 INFO [RS:3;jenkins-hbase4:34765] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 05:15:23,714 INFO [RS:2;jenkins-hbase4:35969] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 05:15:23,714 INFO [RS:0;jenkins-hbase4:46065] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 05:15:23,715 INFO [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(3305): Received CLOSE for 83121727a02707589af30990e9f79713 2023-07-19 05:15:23,715 INFO [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:23,715 DEBUG [RS:0;jenkins-hbase4:46065] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x290a981c to 127.0.0.1:51693 2023-07-19 05:15:23,715 DEBUG [RS:0;jenkins-hbase4:46065] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:23,715 INFO [RS:0;jenkins-hbase4:46065] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 05:15:23,715 INFO [RS:0;jenkins-hbase4:46065] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 05:15:23,715 INFO [RS:0;jenkins-hbase4:46065] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-19 05:15:23,715 INFO [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-19 05:15:23,715 INFO [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:23,715 DEBUG [RS:1;jenkins-hbase4:41955] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2aae53b8 to 127.0.0.1:51693 2023-07-19 05:15:23,715 DEBUG [RS:1;jenkins-hbase4:41955] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:23,715 INFO [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-19 05:15:23,715 INFO [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-19 05:15:23,715 DEBUG [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer(1478): Online Regions={b9b46c9a9a26be1b89c10c145434cfe8=hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8.} 2023-07-19 05:15:23,715 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 05:15:23,716 DEBUG [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer(1504): Waiting on b9b46c9a9a26be1b89c10c145434cfe8 2023-07-19 05:15:23,716 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 05:15:23,715 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b9b46c9a9a26be1b89c10c145434cfe8, disabling compactions & flushes 2023-07-19 05:15:23,716 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 05:15:23,716 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8. 2023-07-19 05:15:23,716 INFO [RS:3;jenkins-hbase4:34765] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 05:15:23,716 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 05:15:23,716 INFO [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:23,715 DEBUG [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-19 05:15:23,716 DEBUG [RS:2;jenkins-hbase4:35969] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x27fae21c to 127.0.0.1:51693 2023-07-19 05:15:23,716 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 05:15:23,716 INFO [RS:3;jenkins-hbase4:34765] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 05:15:23,716 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8. 
2023-07-19 05:15:23,716 INFO [RS:3;jenkins-hbase4:34765] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:23,716 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 05:15:23,716 DEBUG [RS:2;jenkins-hbase4:35969] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:23,716 DEBUG [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-19 05:15:23,716 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 83121727a02707589af30990e9f79713, disabling compactions & flushes 2023-07-19 05:15:23,717 INFO [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-19 05:15:23,716 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-19 05:15:23,716 DEBUG [RS:3;jenkins-hbase4:34765] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x504efecf to 127.0.0.1:51693 2023-07-19 05:15:23,716 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8. after waiting 0 ms 2023-07-19 05:15:23,717 DEBUG [RS:3;jenkins-hbase4:34765] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:23,717 DEBUG [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(1478): Online Regions={83121727a02707589af30990e9f79713=hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713.} 2023-07-19 05:15:23,717 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713. 2023-07-19 05:15:23,717 DEBUG [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(1504): Waiting on 83121727a02707589af30990e9f79713 2023-07-19 05:15:23,717 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713. 2023-07-19 05:15:23,717 INFO [RS:3;jenkins-hbase4:34765] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34765,1689743721451; all regions closed. 2023-07-19 05:15:23,717 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8. 2023-07-19 05:15:23,717 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713. after waiting 0 ms 2023-07-19 05:15:23,717 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b9b46c9a9a26be1b89c10c145434cfe8 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-19 05:15:23,717 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713. 
2023-07-19 05:15:23,717 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 83121727a02707589af30990e9f79713 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-19 05:15:23,729 DEBUG [RS:3;jenkins-hbase4:34765] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/oldWALs 2023-07-19 05:15:23,729 INFO [RS:3;jenkins-hbase4:34765] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34765%2C1689743721451:(num 1689743721766) 2023-07-19 05:15:23,729 DEBUG [RS:3;jenkins-hbase4:34765] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:23,729 INFO [RS:3;jenkins-hbase4:34765] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:23,732 INFO [RS:3;jenkins-hbase4:34765] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 05:15:23,732 INFO [RS:3;jenkins-hbase4:34765] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 05:15:23,732 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 05:15:23,732 INFO [RS:3;jenkins-hbase4:34765] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 05:15:23,733 INFO [RS:3;jenkins-hbase4:34765] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 05:15:23,737 INFO [RS:3;jenkins-hbase4:34765] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34765 2023-07-19 05:15:23,739 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:23,739 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:34765-0x1017c017df1000b, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:23,739 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:23,739 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:23,739 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:34765-0x1017c017df1000b, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:23,739 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:23,741 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 
05:15:23,743 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34765,1689743721451 2023-07-19 05:15:23,743 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:23,743 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:23,744 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34765,1689743721451] 2023-07-19 05:15:23,744 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34765,1689743721451; numProcessing=1 2023-07-19 05:15:23,746 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34765,1689743721451 already deleted, retry=false 2023-07-19 05:15:23,746 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34765,1689743721451 expired; onlineServers=3 2023-07-19 05:15:23,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/namespace/b9b46c9a9a26be1b89c10c145434cfe8/.tmp/info/28077f92c83640b1af7b10050c5d23e1 2023-07-19 05:15:23,763 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/.tmp/info/fa932a76f1a5499e8fd5b0b90b597c81 2023-07-19 05:15:23,771 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/rsgroup/83121727a02707589af30990e9f79713/.tmp/m/dab631d7c71d4c59b92a9b4deb2a1994 2023-07-19 05:15:23,772 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fa932a76f1a5499e8fd5b0b90b597c81 2023-07-19 05:15:23,773 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 28077f92c83640b1af7b10050c5d23e1 2023-07-19 05:15:23,774 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/namespace/b9b46c9a9a26be1b89c10c145434cfe8/.tmp/info/28077f92c83640b1af7b10050c5d23e1 as hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/namespace/b9b46c9a9a26be1b89c10c145434cfe8/info/28077f92c83640b1af7b10050c5d23e1 2023-07-19 05:15:23,777 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dab631d7c71d4c59b92a9b4deb2a1994 2023-07-19 05:15:23,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/rsgroup/83121727a02707589af30990e9f79713/.tmp/m/dab631d7c71d4c59b92a9b4deb2a1994 as hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/rsgroup/83121727a02707589af30990e9f79713/m/dab631d7c71d4c59b92a9b4deb2a1994 2023-07-19 05:15:23,780 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 28077f92c83640b1af7b10050c5d23e1 2023-07-19 05:15:23,780 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/namespace/b9b46c9a9a26be1b89c10c145434cfe8/info/28077f92c83640b1af7b10050c5d23e1, entries=3, sequenceid=9, filesize=5.0 K 2023-07-19 05:15:23,781 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for b9b46c9a9a26be1b89c10c145434cfe8 in 64ms, sequenceid=9, compaction requested=false 2023-07-19 05:15:23,787 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dab631d7c71d4c59b92a9b4deb2a1994 2023-07-19 05:15:23,788 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/rsgroup/83121727a02707589af30990e9f79713/m/dab631d7c71d4c59b92a9b4deb2a1994, entries=12, sequenceid=29, filesize=5.4 K 2023-07-19 05:15:23,788 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 83121727a02707589af30990e9f79713 in 71ms, sequenceid=29, compaction requested=false 2023-07-19 05:15:23,798 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:23,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/namespace/b9b46c9a9a26be1b89c10c145434cfe8/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-19 05:15:23,800 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8. 2023-07-19 05:15:23,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b9b46c9a9a26be1b89c10c145434cfe8: 2023-07-19 05:15:23,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689743720895.b9b46c9a9a26be1b89c10c145434cfe8. 
2023-07-19 05:15:23,802 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/.tmp/rep_barrier/b824206302424c719b7323cb2d06c95c 2023-07-19 05:15:23,802 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/rsgroup/83121727a02707589af30990e9f79713/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-19 05:15:23,803 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 05:15:23,804 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713. 2023-07-19 05:15:23,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 83121727a02707589af30990e9f79713: 2023-07-19 05:15:23,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689743721041.83121727a02707589af30990e9f79713. 2023-07-19 05:15:23,808 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b824206302424c719b7323cb2d06c95c 2023-07-19 05:15:23,818 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/.tmp/table/ce36ac7e54b4433da80bde56ddcf17f3 2023-07-19 05:15:23,823 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce36ac7e54b4433da80bde56ddcf17f3 2023-07-19 05:15:23,823 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/.tmp/info/fa932a76f1a5499e8fd5b0b90b597c81 as hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/info/fa932a76f1a5499e8fd5b0b90b597c81 2023-07-19 05:15:23,828 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fa932a76f1a5499e8fd5b0b90b597c81 2023-07-19 05:15:23,828 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/info/fa932a76f1a5499e8fd5b0b90b597c81, entries=22, sequenceid=26, filesize=7.3 K 2023-07-19 05:15:23,829 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/.tmp/rep_barrier/b824206302424c719b7323cb2d06c95c as hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/rep_barrier/b824206302424c719b7323cb2d06c95c 2023-07-19 05:15:23,833 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b824206302424c719b7323cb2d06c95c 2023-07-19 05:15:23,834 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/rep_barrier/b824206302424c719b7323cb2d06c95c, entries=1, sequenceid=26, filesize=4.9 K 2023-07-19 05:15:23,834 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/.tmp/table/ce36ac7e54b4433da80bde56ddcf17f3 as hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/table/ce36ac7e54b4433da80bde56ddcf17f3 2023-07-19 05:15:23,840 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce36ac7e54b4433da80bde56ddcf17f3 2023-07-19 05:15:23,840 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/table/ce36ac7e54b4433da80bde56ddcf17f3, entries=6, sequenceid=26, filesize=5.1 K 2023-07-19 05:15:23,841 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 125ms, sequenceid=26, compaction requested=false 2023-07-19 05:15:23,851 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-19 05:15:23,851 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 05:15:23,853 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 05:15:23,853 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 05:15:23,853 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-19 05:15:23,899 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:34765-0x1017c017df1000b, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:23,899 INFO [RS:3;jenkins-hbase4:34765] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34765,1689743721451; zookeeper connection closed. 
2023-07-19 05:15:23,899 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:34765-0x1017c017df1000b, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:23,899 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@148795c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@148795c 2023-07-19 05:15:23,916 INFO [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41955,1689743719898; all regions closed. 2023-07-19 05:15:23,917 INFO [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46065,1689743719751; all regions closed. 2023-07-19 05:15:23,917 INFO [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35969,1689743720058; all regions closed. 2023-07-19 05:15:23,924 DEBUG [RS:1;jenkins-hbase4:41955] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/oldWALs 2023-07-19 05:15:23,924 INFO [RS:1;jenkins-hbase4:41955] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41955%2C1689743719898:(num 1689743720743) 2023-07-19 05:15:23,924 DEBUG [RS:1;jenkins-hbase4:41955] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:23,924 INFO [RS:1;jenkins-hbase4:41955] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:23,924 INFO [RS:1;jenkins-hbase4:41955] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 05:15:23,925 INFO [RS:1;jenkins-hbase4:41955] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 05:15:23,925 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 05:15:23,925 INFO [RS:1;jenkins-hbase4:41955] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 05:15:23,925 INFO [RS:1;jenkins-hbase4:41955] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-19 05:15:23,926 INFO [RS:1;jenkins-hbase4:41955] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41955 2023-07-19 05:15:23,927 DEBUG [RS:0;jenkins-hbase4:46065] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/oldWALs 2023-07-19 05:15:23,927 INFO [RS:0;jenkins-hbase4:46065] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46065%2C1689743719751.meta:.meta(num 1689743720835) 2023-07-19 05:15:23,927 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:23,927 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:23,927 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:23,927 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41955,1689743719898 2023-07-19 05:15:23,928 DEBUG [RS:2;jenkins-hbase4:35969] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/oldWALs 2023-07-19 05:15:23,928 INFO [RS:2;jenkins-hbase4:35969] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35969%2C1689743720058:(num 1689743720736) 2023-07-19 05:15:23,928 DEBUG [RS:2;jenkins-hbase4:35969] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:23,928 INFO [RS:2;jenkins-hbase4:35969] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:23,928 INFO [RS:2;jenkins-hbase4:35969] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 05:15:23,928 INFO [RS:2;jenkins-hbase4:35969] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 05:15:23,928 INFO [RS:2;jenkins-hbase4:35969] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 05:15:23,928 INFO [RS:2;jenkins-hbase4:35969] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 05:15:23,929 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-19 05:15:23,930 INFO [RS:2;jenkins-hbase4:35969] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35969 2023-07-19 05:15:23,930 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41955,1689743719898] 2023-07-19 05:15:23,930 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41955,1689743719898; numProcessing=2 2023-07-19 05:15:23,933 DEBUG [RS:0;jenkins-hbase4:46065] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/oldWALs 2023-07-19 05:15:23,933 INFO [RS:0;jenkins-hbase4:46065] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46065%2C1689743719751:(num 1689743720736) 2023-07-19 05:15:23,933 DEBUG [RS:0;jenkins-hbase4:46065] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:23,933 INFO [RS:0;jenkins-hbase4:46065] regionserver.LeaseManager(133): Closed leases 2023-07-19 05:15:23,933 INFO [RS:0;jenkins-hbase4:46065] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 05:15:23,933 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 05:15:23,934 INFO [RS:0;jenkins-hbase4:46065] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46065 2023-07-19 05:15:24,030 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:24,030 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1017c017df10002, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:24,030 INFO [RS:1;jenkins-hbase4:41955] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41955,1689743719898; zookeeper connection closed. 
2023-07-19 05:15:24,030 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@660dc660] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@660dc660 2023-07-19 05:15:24,031 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:24,031 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:24,031 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 05:15:24,031 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35969,1689743720058 2023-07-19 05:15:24,031 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46065,1689743719751 2023-07-19 05:15:24,032 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41955,1689743719898 already deleted, retry=false 2023-07-19 05:15:24,032 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41955,1689743719898 expired; onlineServers=2 2023-07-19 05:15:24,032 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35969,1689743720058] 2023-07-19 05:15:24,032 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35969,1689743720058; numProcessing=3 2023-07-19 05:15:24,034 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35969,1689743720058 already deleted, retry=false 2023-07-19 05:15:24,035 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35969,1689743720058 expired; onlineServers=1 2023-07-19 05:15:24,035 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46065,1689743719751] 2023-07-19 05:15:24,035 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46065,1689743719751; numProcessing=4 2023-07-19 05:15:24,036 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46065,1689743719751 already deleted, retry=false 2023-07-19 05:15:24,036 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46065,1689743719751 expired; onlineServers=0 2023-07-19 05:15:24,036 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** 
STOPPING region server 'jenkins-hbase4.apache.org,39261,1689743719572' ***** 2023-07-19 05:15:24,036 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-19 05:15:24,037 DEBUG [M:0;jenkins-hbase4:39261] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28426ca2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 05:15:24,037 INFO [M:0;jenkins-hbase4:39261] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 05:15:24,040 INFO [M:0;jenkins-hbase4:39261] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@73d7f31e{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 05:15:24,040 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-19 05:15:24,040 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 05:15:24,040 INFO [M:0;jenkins-hbase4:39261] server.AbstractConnector(383): Stopped ServerConnector@147da2bf{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:24,040 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 05:15:24,040 INFO [M:0;jenkins-hbase4:39261] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 05:15:24,041 INFO [M:0;jenkins-hbase4:39261] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@56385464{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 05:15:24,041 INFO [M:0;jenkins-hbase4:39261] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1194b7d4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/hadoop.log.dir/,STOPPED} 2023-07-19 05:15:24,041 INFO [M:0;jenkins-hbase4:39261] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39261,1689743719572 2023-07-19 05:15:24,041 INFO [M:0;jenkins-hbase4:39261] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39261,1689743719572; all regions closed. 
2023-07-19 05:15:24,042 DEBUG [M:0;jenkins-hbase4:39261] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 05:15:24,042 INFO [M:0;jenkins-hbase4:39261] master.HMaster(1491): Stopping master jetty server 2023-07-19 05:15:24,042 INFO [M:0;jenkins-hbase4:39261] server.AbstractConnector(383): Stopped ServerConnector@4df21cd6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 05:15:24,042 DEBUG [M:0;jenkins-hbase4:39261] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-19 05:15:24,043 DEBUG [M:0;jenkins-hbase4:39261] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-19 05:15:24,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689743720452] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689743720452,5,FailOnTimeoutGroup] 2023-07-19 05:15:24,043 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-19 05:15:24,043 INFO [M:0;jenkins-hbase4:39261] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-19 05:15:24,043 INFO [M:0;jenkins-hbase4:39261] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-19 05:15:24,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689743720452] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689743720452,5,FailOnTimeoutGroup] 2023-07-19 05:15:24,043 INFO [M:0;jenkins-hbase4:39261] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-19 05:15:24,043 DEBUG [M:0;jenkins-hbase4:39261] master.HMaster(1512): Stopping service threads 2023-07-19 05:15:24,043 INFO [M:0;jenkins-hbase4:39261] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-19 05:15:24,043 ERROR [M:0;jenkins-hbase4:39261] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-19 05:15:24,043 INFO [M:0;jenkins-hbase4:39261] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-19 05:15:24,043 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-19 05:15:24,044 DEBUG [M:0;jenkins-hbase4:39261] zookeeper.ZKUtil(398): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-19 05:15:24,044 WARN [M:0;jenkins-hbase4:39261] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-19 05:15:24,044 INFO [M:0;jenkins-hbase4:39261] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-19 05:15:24,044 INFO [M:0;jenkins-hbase4:39261] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-19 05:15:24,044 DEBUG [M:0;jenkins-hbase4:39261] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 05:15:24,044 INFO [M:0;jenkins-hbase4:39261] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-19 05:15:24,044 DEBUG [M:0;jenkins-hbase4:39261] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:15:24,044 DEBUG [M:0;jenkins-hbase4:39261] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 05:15:24,044 DEBUG [M:0;jenkins-hbase4:39261] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:15:24,044 INFO [M:0;jenkins-hbase4:39261] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.20 KB heapSize=90.64 KB 2023-07-19 05:15:24,055 INFO [M:0;jenkins-hbase4:39261] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.20 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/539205fb99934bf08e79b2efb6f9a204 2023-07-19 05:15:24,060 DEBUG [M:0;jenkins-hbase4:39261] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/539205fb99934bf08e79b2efb6f9a204 as hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/539205fb99934bf08e79b2efb6f9a204 2023-07-19 05:15:24,065 INFO [M:0;jenkins-hbase4:39261] regionserver.HStore(1080): Added hdfs://localhost:44175/user/jenkins/test-data/760309a7-bdb9-f7fa-7419-a01ae2b00b2f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/539205fb99934bf08e79b2efb6f9a204, entries=22, sequenceid=175, filesize=11.1 K 2023-07-19 05:15:24,066 INFO [M:0;jenkins-hbase4:39261] regionserver.HRegion(2948): Finished flush of dataSize ~76.20 KB/78024, heapSize ~90.63 KB/92800, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=175, compaction requested=false 2023-07-19 05:15:24,068 INFO [M:0;jenkins-hbase4:39261] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 05:15:24,068 DEBUG [M:0;jenkins-hbase4:39261] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 05:15:24,072 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 05:15:24,072 INFO [M:0;jenkins-hbase4:39261] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-19 05:15:24,072 INFO [M:0;jenkins-hbase4:39261] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39261 2023-07-19 05:15:24,074 DEBUG [M:0;jenkins-hbase4:39261] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,39261,1689743719572 already deleted, retry=false 2023-07-19 05:15:24,500 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:24,500 INFO [M:0;jenkins-hbase4:39261] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39261,1689743719572; zookeeper connection closed. 
2023-07-19 05:15:24,500 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): master:39261-0x1017c017df10000, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:24,600 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:24,600 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:46065-0x1017c017df10001, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:24,600 INFO [RS:0;jenkins-hbase4:46065] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46065,1689743719751; zookeeper connection closed. 2023-07-19 05:15:24,601 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@168d447f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@168d447f 2023-07-19 05:15:24,701 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:24,701 DEBUG [Listener at localhost/42441-EventThread] zookeeper.ZKWatcher(600): regionserver:35969-0x1017c017df10003, quorum=127.0.0.1:51693, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 05:15:24,701 INFO [RS:2;jenkins-hbase4:35969] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35969,1689743720058; zookeeper connection closed. 2023-07-19 05:15:24,701 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6cd7a757] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6cd7a757 2023-07-19 05:15:24,701 INFO [Listener at localhost/42441] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-19 05:15:24,701 WARN [Listener at localhost/42441] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 05:15:24,705 INFO [Listener at localhost/42441] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 05:15:24,808 WARN [BP-983488047-172.31.14.131-1689743718450 heartbeating to localhost/127.0.0.1:44175] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 05:15:24,808 WARN [BP-983488047-172.31.14.131-1689743718450 heartbeating to localhost/127.0.0.1:44175] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-983488047-172.31.14.131-1689743718450 (Datanode Uuid e2dce496-373b-456d-b505-a160fc58264e) service to localhost/127.0.0.1:44175 2023-07-19 05:15:24,808 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data5/current/BP-983488047-172.31.14.131-1689743718450] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:24,809 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data6/current/BP-983488047-172.31.14.131-1689743718450] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:24,810 WARN [Listener at localhost/42441] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 05:15:24,813 INFO [Listener at localhost/42441] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 05:15:24,916 WARN [BP-983488047-172.31.14.131-1689743718450 heartbeating to localhost/127.0.0.1:44175] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 05:15:24,916 WARN [BP-983488047-172.31.14.131-1689743718450 heartbeating to localhost/127.0.0.1:44175] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-983488047-172.31.14.131-1689743718450 (Datanode Uuid 06352d53-0036-40f4-8dee-0816b87884b2) service to localhost/127.0.0.1:44175 2023-07-19 05:15:24,917 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data3/current/BP-983488047-172.31.14.131-1689743718450] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:24,917 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data4/current/BP-983488047-172.31.14.131-1689743718450] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:24,918 WARN [Listener at localhost/42441] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 05:15:24,921 INFO [Listener at localhost/42441] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 05:15:25,024 WARN [BP-983488047-172.31.14.131-1689743718450 heartbeating to localhost/127.0.0.1:44175] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 05:15:25,025 WARN [BP-983488047-172.31.14.131-1689743718450 heartbeating to localhost/127.0.0.1:44175] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-983488047-172.31.14.131-1689743718450 (Datanode Uuid f68aab75-1165-4ce2-bc20-c73dccd09c21) service to localhost/127.0.0.1:44175 2023-07-19 05:15:25,025 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data1/current/BP-983488047-172.31.14.131-1689743718450] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 05:15:25,026 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4e3b5c94-7503-0e39-0800-596c759cd5e5/cluster_505e0513-5acf-88d4-b472-60a449ac459e/dfs/data/data2/current/BP-983488047-172.31.14.131-1689743718450] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk 
information: sleep interrupted 2023-07-19 05:15:25,037 INFO [Listener at localhost/42441] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 05:15:25,157 INFO [Listener at localhost/42441] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-19 05:15:25,195 INFO [Listener at localhost/42441] hbase.HBaseTestingUtility(1293): Minicluster is down
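
The entries above record the orderly teardown that HBaseTestingUtility#shutdownMiniCluster() drives: the region servers close their WALs and leases, the master flushes its local store and stops its RPC and info servers, and finally the DataNodes and the MiniZK cluster are stopped. As a reading aid only, here is a minimal hedged sketch of a JUnit test that produces this lifecycle; the class name is hypothetical and the option values are illustrative assumptions, not taken from TestRSGroupsAdmin1 itself.

    import static org.junit.Assert.assertFalse;

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.Test;

    // Hypothetical sketch class, not the actual test source.
    public class MiniClusterLifecycleSketch {

      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUpCluster() throws Exception {
        // Brings up MiniZK, a MiniDFS cluster, and an HBase master plus region servers
        // under the test-data directory managed by HBaseTestingUtility.
        TEST_UTIL.startMiniCluster(
            StartMiniClusterOption.builder().numRegionServers(3).build());
      }

      @Test
      public void clusterIsUp() throws Exception {
        // Placeholder check; a real rsgroup test would exercise the RSGroup admin API here.
        assertFalse(TEST_UTIL.getAdmin().getClusterMetrics().getLiveServerMetrics().isEmpty());
      }

      @AfterClass
      public static void tearDownCluster() throws Exception {
        // Stops the region servers and master, archives their WALs, then shuts down the
        // DataNodes and the MiniZK cluster, matching the teardown sequence recorded above.
        TEST_UTIL.shutdownMiniCluster();
      }
    }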