2023-07-22 07:10:18,349 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7 2023-07-22 07:10:18,370 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-22 07:10:18,390 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-22 07:10:18,391 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/cluster_00bb9b6e-43e0-f121-1ccb-023e0a721165, deleteOnExit=true 2023-07-22 07:10:18,391 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-22 07:10:18,392 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/test.cache.data in system properties and HBase conf 2023-07-22 07:10:18,392 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/hadoop.tmp.dir in system properties and HBase conf 2023-07-22 07:10:18,393 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/hadoop.log.dir in system properties and HBase conf 2023-07-22 07:10:18,394 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-22 07:10:18,394 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-22 07:10:18,394 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-22 07:10:18,505 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-22 07:10:18,948 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-22 07:10:18,955 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-22 07:10:18,955 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-22 07:10:18,956 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-22 07:10:18,956 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-22 07:10:18,957 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-22 07:10:18,957 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-22 07:10:18,958 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-22 07:10:18,958 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-22 07:10:18,959 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-22 07:10:18,959 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/nfs.dump.dir in system properties and HBase conf 2023-07-22 07:10:18,959 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/java.io.tmpdir in system properties and HBase conf 2023-07-22 07:10:18,960 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-22 07:10:18,960 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-22 07:10:18,961 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-22 07:10:19,507 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-22 07:10:19,511 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-22 07:10:19,832 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-22 07:10:20,047 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-22 07:10:20,063 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 07:10:20,102 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 07:10:20,136 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/java.io.tmpdir/Jetty_localhost_46247_hdfs____.b832sj/webapp 2023-07-22 07:10:20,295 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46247 2023-07-22 07:10:20,305 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-22 07:10:20,306 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-22 07:10:20,808 WARN [Listener at localhost/40817] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 07:10:20,880 WARN [Listener at localhost/40817] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-22 07:10:20,904 WARN [Listener at localhost/40817] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 07:10:20,911 INFO [Listener at localhost/40817] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 07:10:20,915 INFO [Listener at localhost/40817] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/java.io.tmpdir/Jetty_localhost_33325_datanode____.rbhz7/webapp 2023-07-22 07:10:21,047 INFO [Listener at localhost/40817] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33325 2023-07-22 07:10:21,442 WARN [Listener at localhost/39705] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 07:10:21,451 WARN [Listener at localhost/39705] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-22 07:10:21,455 WARN [Listener at localhost/39705] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 07:10:21,456 INFO [Listener at localhost/39705] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 07:10:21,465 INFO [Listener at localhost/39705] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/java.io.tmpdir/Jetty_localhost_40691_datanode____bllry1/webapp 2023-07-22 07:10:21,565 INFO [Listener at localhost/39705] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40691 2023-07-22 07:10:21,574 WARN [Listener at localhost/38761] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 07:10:21,593 WARN [Listener at localhost/38761] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-22 07:10:21,599 WARN [Listener at localhost/38761] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 07:10:21,601 INFO [Listener at localhost/38761] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 07:10:21,605 INFO [Listener at localhost/38761] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/java.io.tmpdir/Jetty_localhost_41037_datanode____goecm6/webapp 2023-07-22 07:10:21,722 INFO [Listener at localhost/38761] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41037 2023-07-22 07:10:21,733 WARN [Listener at localhost/46507] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 07:10:21,968 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x352ddedd683d350c: Processing first storage report for DS-3ac6edaa-2267-467e-8da8-ff002dee7b14 from datanode ff2595ce-0bca-45d5-828a-c0e725ce9d4b 2023-07-22 07:10:21,969 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x352ddedd683d350c: from storage DS-3ac6edaa-2267-467e-8da8-ff002dee7b14 node DatanodeRegistration(127.0.0.1:42555, datanodeUuid=ff2595ce-0bca-45d5-828a-c0e725ce9d4b, infoPort=43013, 
infoSecurePort=0, ipcPort=38761, storageInfo=lv=-57;cid=testClusterID;nsid=534840483;c=1690009819581), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-22 07:10:21,969 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x19e0d563b5c4d636: Processing first storage report for DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4 from datanode 242650d5-ed9b-4880-bf32-8e1e5bc4eed9 2023-07-22 07:10:21,969 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x19e0d563b5c4d636: from storage DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4 node DatanodeRegistration(127.0.0.1:37967, datanodeUuid=242650d5-ed9b-4880-bf32-8e1e5bc4eed9, infoPort=37449, infoSecurePort=0, ipcPort=46507, storageInfo=lv=-57;cid=testClusterID;nsid=534840483;c=1690009819581), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:21,969 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4a7667b3fd70a240: Processing first storage report for DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c from datanode d5e292e5-0ef3-4a74-8b1e-f2f2dde23363 2023-07-22 07:10:21,970 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4a7667b3fd70a240: from storage DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c node DatanodeRegistration(127.0.0.1:40309, datanodeUuid=d5e292e5-0ef3-4a74-8b1e-f2f2dde23363, infoPort=35825, infoSecurePort=0, ipcPort=39705, storageInfo=lv=-57;cid=testClusterID;nsid=534840483;c=1690009819581), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:21,970 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x352ddedd683d350c: Processing first storage report for DS-8747d09c-1035-4c46-a0ce-aaced382941c from datanode ff2595ce-0bca-45d5-828a-c0e725ce9d4b 2023-07-22 07:10:21,970 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x352ddedd683d350c: from storage DS-8747d09c-1035-4c46-a0ce-aaced382941c node DatanodeRegistration(127.0.0.1:42555, datanodeUuid=ff2595ce-0bca-45d5-828a-c0e725ce9d4b, infoPort=43013, infoSecurePort=0, ipcPort=38761, storageInfo=lv=-57;cid=testClusterID;nsid=534840483;c=1690009819581), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:21,970 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x19e0d563b5c4d636: Processing first storage report for DS-72f0bc52-ea25-45b2-a36e-4b454d25505a from datanode 242650d5-ed9b-4880-bf32-8e1e5bc4eed9 2023-07-22 07:10:21,970 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x19e0d563b5c4d636: from storage DS-72f0bc52-ea25-45b2-a36e-4b454d25505a node DatanodeRegistration(127.0.0.1:37967, datanodeUuid=242650d5-ed9b-4880-bf32-8e1e5bc4eed9, infoPort=37449, infoSecurePort=0, ipcPort=46507, storageInfo=lv=-57;cid=testClusterID;nsid=534840483;c=1690009819581), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:21,970 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4a7667b3fd70a240: Processing first storage report for DS-a93b39e2-f59f-4da6-b637-553a1d4a3e6f from datanode d5e292e5-0ef3-4a74-8b1e-f2f2dde23363 2023-07-22 07:10:21,970 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4a7667b3fd70a240: from storage 
DS-a93b39e2-f59f-4da6-b637-553a1d4a3e6f node DatanodeRegistration(127.0.0.1:40309, datanodeUuid=d5e292e5-0ef3-4a74-8b1e-f2f2dde23363, infoPort=35825, infoSecurePort=0, ipcPort=39705, storageInfo=lv=-57;cid=testClusterID;nsid=534840483;c=1690009819581), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:22,214 DEBUG [Listener at localhost/46507] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7 2023-07-22 07:10:22,347 INFO [Listener at localhost/46507] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/cluster_00bb9b6e-43e0-f121-1ccb-023e0a721165/zookeeper_0, clientPort=56256, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/cluster_00bb9b6e-43e0-f121-1ccb-023e0a721165/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/cluster_00bb9b6e-43e0-f121-1ccb-023e0a721165/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-22 07:10:22,366 INFO [Listener at localhost/46507] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56256 2023-07-22 07:10:22,380 INFO [Listener at localhost/46507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:22,383 INFO [Listener at localhost/46507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:23,088 INFO [Listener at localhost/46507] util.FSUtils(471): Created version file at hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666 with version=8 2023-07-22 07:10:23,089 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/hbase-staging 2023-07-22 07:10:23,096 DEBUG [Listener at localhost/46507] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-22 07:10:23,097 DEBUG [Listener at localhost/46507] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-22 07:10:23,097 DEBUG [Listener at localhost/46507] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-22 07:10:23,097 DEBUG [Listener at localhost/46507] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-22 07:10:23,445 INFO [Listener at localhost/46507] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-22 07:10:24,118 INFO [Listener at localhost/46507] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 07:10:24,170 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:24,171 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:24,171 INFO [Listener at localhost/46507] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 07:10:24,172 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:24,172 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 07:10:24,334 INFO [Listener at localhost/46507] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 07:10:24,425 DEBUG [Listener at localhost/46507] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-22 07:10:24,524 INFO [Listener at localhost/46507] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37061 2023-07-22 07:10:24,536 INFO [Listener at localhost/46507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:24,539 INFO [Listener at localhost/46507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:24,563 INFO [Listener at localhost/46507] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37061 connecting to ZooKeeper ensemble=127.0.0.1:56256 2023-07-22 07:10:24,606 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:370610x0, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:10:24,610 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37061-0x1018bdde7740000 connected 2023-07-22 07:10:24,636 DEBUG [Listener at localhost/46507] zookeeper.ZKUtil(164): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 07:10:24,637 DEBUG [Listener at localhost/46507] zookeeper.ZKUtil(164): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:24,641 DEBUG [Listener at localhost/46507] zookeeper.ZKUtil(164): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 07:10:24,650 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37061 2023-07-22 07:10:24,650 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37061 2023-07-22 07:10:24,650 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37061 2023-07-22 07:10:24,651 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37061 2023-07-22 07:10:24,651 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37061 2023-07-22 07:10:24,688 INFO [Listener at localhost/46507] log.Log(170): Logging initialized @7093ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-22 07:10:24,821 INFO [Listener at localhost/46507] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 07:10:24,822 INFO [Listener at localhost/46507] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 07:10:24,823 INFO [Listener at localhost/46507] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 07:10:24,825 INFO [Listener at localhost/46507] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-22 07:10:24,825 INFO [Listener at localhost/46507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 07:10:24,825 INFO [Listener at localhost/46507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 07:10:24,829 INFO [Listener at localhost/46507] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-22 07:10:24,899 INFO [Listener at localhost/46507] http.HttpServer(1146): Jetty bound to port 37185 2023-07-22 07:10:24,901 INFO [Listener at localhost/46507] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:24,948 INFO [Listener at localhost/46507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:24,952 INFO [Listener at localhost/46507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@adb65cd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/hadoop.log.dir/,AVAILABLE} 2023-07-22 07:10:24,953 INFO [Listener at localhost/46507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:24,954 INFO [Listener at localhost/46507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@391456a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 07:10:25,136 INFO [Listener at localhost/46507] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 07:10:25,149 INFO [Listener at localhost/46507] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 07:10:25,150 INFO [Listener at localhost/46507] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 07:10:25,152 INFO [Listener at localhost/46507] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-22 07:10:25,159 INFO [Listener at localhost/46507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:25,189 INFO [Listener at localhost/46507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@161bf7cb{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/java.io.tmpdir/jetty-0_0_0_0-37185-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5107747861899284085/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-22 07:10:25,203 INFO [Listener at localhost/46507] server.AbstractConnector(333): Started ServerConnector@7b5904dd{HTTP/1.1, (http/1.1)}{0.0.0.0:37185} 2023-07-22 07:10:25,203 INFO [Listener at localhost/46507] server.Server(415): Started @7608ms 2023-07-22 07:10:25,207 INFO [Listener at localhost/46507] master.HMaster(444): hbase.rootdir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666, hbase.cluster.distributed=false 2023-07-22 07:10:25,284 INFO [Listener at localhost/46507] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 07:10:25,284 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:25,284 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:25,284 INFO 
[Listener at localhost/46507] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 07:10:25,284 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:25,284 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 07:10:25,290 INFO [Listener at localhost/46507] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 07:10:25,293 INFO [Listener at localhost/46507] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34133 2023-07-22 07:10:25,295 INFO [Listener at localhost/46507] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 07:10:25,303 DEBUG [Listener at localhost/46507] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 07:10:25,304 INFO [Listener at localhost/46507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:25,306 INFO [Listener at localhost/46507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:25,308 INFO [Listener at localhost/46507] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34133 connecting to ZooKeeper ensemble=127.0.0.1:56256 2023-07-22 07:10:25,312 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:341330x0, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:10:25,313 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34133-0x1018bdde7740001 connected 2023-07-22 07:10:25,313 DEBUG [Listener at localhost/46507] zookeeper.ZKUtil(164): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 07:10:25,315 DEBUG [Listener at localhost/46507] zookeeper.ZKUtil(164): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:25,316 DEBUG [Listener at localhost/46507] zookeeper.ZKUtil(164): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 07:10:25,316 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34133 2023-07-22 07:10:25,317 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34133 2023-07-22 07:10:25,317 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34133 2023-07-22 07:10:25,317 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34133 2023-07-22 07:10:25,318 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34133 2023-07-22 07:10:25,320 INFO [Listener at localhost/46507] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 07:10:25,320 INFO [Listener at localhost/46507] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 07:10:25,320 INFO [Listener at localhost/46507] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 07:10:25,321 INFO [Listener at localhost/46507] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 07:10:25,321 INFO [Listener at localhost/46507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 07:10:25,321 INFO [Listener at localhost/46507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 07:10:25,322 INFO [Listener at localhost/46507] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-22 07:10:25,324 INFO [Listener at localhost/46507] http.HttpServer(1146): Jetty bound to port 46197 2023-07-22 07:10:25,324 INFO [Listener at localhost/46507] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:25,327 INFO [Listener at localhost/46507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:25,328 INFO [Listener at localhost/46507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1c6f9d30{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/hadoop.log.dir/,AVAILABLE} 2023-07-22 07:10:25,328 INFO [Listener at localhost/46507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:25,328 INFO [Listener at localhost/46507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c80b18{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 07:10:25,455 INFO [Listener at localhost/46507] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 07:10:25,457 INFO [Listener at localhost/46507] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 07:10:25,457 INFO [Listener at localhost/46507] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 07:10:25,458 INFO [Listener at localhost/46507] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-22 07:10:25,459 INFO [Listener at localhost/46507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:25,464 INFO 
[Listener at localhost/46507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@40f2000c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/java.io.tmpdir/jetty-0_0_0_0-46197-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3641952706597345369/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:25,465 INFO [Listener at localhost/46507] server.AbstractConnector(333): Started ServerConnector@22a36f37{HTTP/1.1, (http/1.1)}{0.0.0.0:46197} 2023-07-22 07:10:25,466 INFO [Listener at localhost/46507] server.Server(415): Started @7871ms 2023-07-22 07:10:25,478 INFO [Listener at localhost/46507] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 07:10:25,478 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:25,479 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:25,479 INFO [Listener at localhost/46507] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 07:10:25,479 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:25,479 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 07:10:25,479 INFO [Listener at localhost/46507] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 07:10:25,481 INFO [Listener at localhost/46507] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41787 2023-07-22 07:10:25,481 INFO [Listener at localhost/46507] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 07:10:25,482 DEBUG [Listener at localhost/46507] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 07:10:25,483 INFO [Listener at localhost/46507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:25,484 INFO [Listener at localhost/46507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:25,485 INFO [Listener at localhost/46507] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41787 connecting to ZooKeeper ensemble=127.0.0.1:56256 2023-07-22 07:10:25,488 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:417870x0, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 
07:10:25,490 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41787-0x1018bdde7740002 connected 2023-07-22 07:10:25,490 DEBUG [Listener at localhost/46507] zookeeper.ZKUtil(164): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 07:10:25,491 DEBUG [Listener at localhost/46507] zookeeper.ZKUtil(164): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:25,491 DEBUG [Listener at localhost/46507] zookeeper.ZKUtil(164): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 07:10:25,492 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41787 2023-07-22 07:10:25,492 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41787 2023-07-22 07:10:25,492 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41787 2023-07-22 07:10:25,493 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41787 2023-07-22 07:10:25,493 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41787 2023-07-22 07:10:25,495 INFO [Listener at localhost/46507] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 07:10:25,495 INFO [Listener at localhost/46507] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 07:10:25,495 INFO [Listener at localhost/46507] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 07:10:25,496 INFO [Listener at localhost/46507] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 07:10:25,496 INFO [Listener at localhost/46507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 07:10:25,496 INFO [Listener at localhost/46507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 07:10:25,496 INFO [Listener at localhost/46507] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-22 07:10:25,497 INFO [Listener at localhost/46507] http.HttpServer(1146): Jetty bound to port 46223 2023-07-22 07:10:25,497 INFO [Listener at localhost/46507] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:25,500 INFO [Listener at localhost/46507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:25,501 INFO [Listener at localhost/46507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@64f476b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/hadoop.log.dir/,AVAILABLE} 2023-07-22 07:10:25,501 INFO [Listener at localhost/46507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:25,501 INFO [Listener at localhost/46507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@46770358{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 07:10:25,620 INFO [Listener at localhost/46507] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 07:10:25,621 INFO [Listener at localhost/46507] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 07:10:25,621 INFO [Listener at localhost/46507] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 07:10:25,621 INFO [Listener at localhost/46507] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-22 07:10:25,622 INFO [Listener at localhost/46507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:25,623 INFO [Listener at localhost/46507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@67816748{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/java.io.tmpdir/jetty-0_0_0_0-46223-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1453295634574270029/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:25,624 INFO [Listener at localhost/46507] server.AbstractConnector(333): Started ServerConnector@1ba8dae2{HTTP/1.1, (http/1.1)}{0.0.0.0:46223} 2023-07-22 07:10:25,625 INFO [Listener at localhost/46507] server.Server(415): Started @8030ms 2023-07-22 07:10:25,637 INFO [Listener at localhost/46507] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 07:10:25,637 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:25,637 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:25,637 INFO [Listener at localhost/46507] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 07:10:25,638 INFO 
[Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:25,638 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 07:10:25,638 INFO [Listener at localhost/46507] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 07:10:25,639 INFO [Listener at localhost/46507] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39057 2023-07-22 07:10:25,640 INFO [Listener at localhost/46507] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 07:10:25,641 DEBUG [Listener at localhost/46507] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 07:10:25,642 INFO [Listener at localhost/46507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:25,643 INFO [Listener at localhost/46507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:25,644 INFO [Listener at localhost/46507] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39057 connecting to ZooKeeper ensemble=127.0.0.1:56256 2023-07-22 07:10:25,648 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:390570x0, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:10:25,649 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39057-0x1018bdde7740003 connected 2023-07-22 07:10:25,650 DEBUG [Listener at localhost/46507] zookeeper.ZKUtil(164): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 07:10:25,650 DEBUG [Listener at localhost/46507] zookeeper.ZKUtil(164): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:25,651 DEBUG [Listener at localhost/46507] zookeeper.ZKUtil(164): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 07:10:25,652 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39057 2023-07-22 07:10:25,652 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39057 2023-07-22 07:10:25,652 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39057 2023-07-22 07:10:25,653 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39057 2023-07-22 07:10:25,653 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39057 2023-07-22 07:10:25,656 INFO [Listener at localhost/46507] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 07:10:25,656 INFO [Listener at localhost/46507] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 07:10:25,656 INFO [Listener at localhost/46507] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 07:10:25,656 INFO [Listener at localhost/46507] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 07:10:25,657 INFO [Listener at localhost/46507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 07:10:25,657 INFO [Listener at localhost/46507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 07:10:25,657 INFO [Listener at localhost/46507] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-22 07:10:25,658 INFO [Listener at localhost/46507] http.HttpServer(1146): Jetty bound to port 40801 2023-07-22 07:10:25,658 INFO [Listener at localhost/46507] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:25,659 INFO [Listener at localhost/46507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:25,660 INFO [Listener at localhost/46507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@145f3cb8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/hadoop.log.dir/,AVAILABLE} 2023-07-22 07:10:25,660 INFO [Listener at localhost/46507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:25,660 INFO [Listener at localhost/46507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7431440f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 07:10:25,783 INFO [Listener at localhost/46507] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 07:10:25,784 INFO [Listener at localhost/46507] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 07:10:25,784 INFO [Listener at localhost/46507] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 07:10:25,784 INFO [Listener at localhost/46507] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-22 07:10:25,786 INFO [Listener at localhost/46507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:25,787 INFO [Listener at localhost/46507] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@38a30dd7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/java.io.tmpdir/jetty-0_0_0_0-40801-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1940602251158668916/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:25,788 INFO [Listener at localhost/46507] server.AbstractConnector(333): Started ServerConnector@16240f3c{HTTP/1.1, (http/1.1)}{0.0.0.0:40801} 2023-07-22 07:10:25,788 INFO [Listener at localhost/46507] server.Server(415): Started @8194ms 2023-07-22 07:10:25,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:25,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@75b41500{HTTP/1.1, (http/1.1)}{0.0.0.0:37933} 2023-07-22 07:10:25,814 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8219ms 2023-07-22 07:10:25,814 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,37061,1690009823266 2023-07-22 07:10:25,825 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-22 07:10:25,827 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,37061,1690009823266 2023-07-22 07:10:25,848 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 07:10:25,848 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 07:10:25,848 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 07:10:25,848 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 07:10:25,848 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:25,850 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 07:10:25,852 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 07:10:25,853 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,37061,1690009823266 from backup master directory 2023-07-22 07:10:25,856 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,37061,1690009823266 2023-07-22 07:10:25,857 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-22 07:10:25,857 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 07:10:25,857 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,37061,1690009823266 2023-07-22 07:10:25,861 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-22 07:10:25,862 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-22 07:10:25,959 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/hbase.id with ID: d8ac43d8-c035-4739-ae09-79d2c1778afb 2023-07-22 07:10:26,003 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:26,019 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:26,071 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x653060c4 to 127.0.0.1:56256 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:10:26,096 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3992feb1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:10:26,122 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:26,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-22 07:10:26,150 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-22 07:10:26,150 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-22 07:10:26,153 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-22 07:10:26,158 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-22 07:10:26,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:10:26,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/MasterData/data/master/store-tmp 2023-07-22 07:10:26,258 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:26,258 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-22 07:10:26,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:26,259 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:26,259 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-22 07:10:26,259 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:26,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-22 07:10:26,259 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 07:10:26,260 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/MasterData/WALs/jenkins-hbase4.apache.org,37061,1690009823266 2023-07-22 07:10:26,287 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37061%2C1690009823266, suffix=, logDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/MasterData/WALs/jenkins-hbase4.apache.org,37061,1690009823266, archiveDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/MasterData/oldWALs, maxLogs=10 2023-07-22 07:10:26,362 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40309,DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c,DISK] 2023-07-22 07:10:26,362 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42555,DS-3ac6edaa-2267-467e-8da8-ff002dee7b14,DISK] 2023-07-22 07:10:26,362 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37967,DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4,DISK] 2023-07-22 07:10:26,371 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-22 07:10:26,459 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/MasterData/WALs/jenkins-hbase4.apache.org,37061,1690009823266/jenkins-hbase4.apache.org%2C37061%2C1690009823266.1690009826302 2023-07-22 07:10:26,461 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42555,DS-3ac6edaa-2267-467e-8da8-ff002dee7b14,DISK], DatanodeInfoWithStorage[127.0.0.1:37967,DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4,DISK], DatanodeInfoWithStorage[127.0.0.1:40309,DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c,DISK]] 2023-07-22 07:10:26,462 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:26,462 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:26,467 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:10:26,469 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:10:26,554 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:10:26,563 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-22 07:10:26,596 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-22 07:10:26,609 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-22 07:10:26,614 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:10:26,616 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:10:26,632 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:10:26,636 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:26,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10124784160, jitterRate=-0.0570559948682785}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:26,638 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 07:10:26,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-22 07:10:26,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-22 07:10:26,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-22 07:10:26,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-22 07:10:26,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-22 07:10:26,720 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 50 msec 2023-07-22 07:10:26,721 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-22 07:10:26,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-22 07:10:26,769 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-22 07:10:26,779 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-22 07:10:26,786 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-22 07:10:26,792 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-22 07:10:26,795 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:26,798 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-22 07:10:26,799 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-22 07:10:26,820 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-22 07:10:26,826 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:26,826 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:26,826 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:26,826 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:26,827 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:26,827 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,37061,1690009823266, sessionid=0x1018bdde7740000, setting cluster-up flag (Was=false) 2023-07-22 07:10:26,850 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:26,862 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-22 07:10:26,864 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37061,1690009823266 2023-07-22 07:10:26,869 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:26,874 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-22 07:10:26,876 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37061,1690009823266 2023-07-22 07:10:26,878 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.hbase-snapshot/.tmp 2023-07-22 07:10:26,892 INFO [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(951): ClusterId : d8ac43d8-c035-4739-ae09-79d2c1778afb 2023-07-22 07:10:26,892 INFO [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer(951): ClusterId : d8ac43d8-c035-4739-ae09-79d2c1778afb 2023-07-22 07:10:26,892 INFO [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(951): ClusterId : d8ac43d8-c035-4739-ae09-79d2c1778afb 2023-07-22 07:10:26,901 DEBUG [RS:1;jenkins-hbase4:41787] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 07:10:26,901 DEBUG [RS:0;jenkins-hbase4:34133] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 07:10:26,901 DEBUG [RS:2;jenkins-hbase4:39057] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 07:10:26,908 DEBUG [RS:2;jenkins-hbase4:39057] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 07:10:26,908 DEBUG [RS:1;jenkins-hbase4:41787] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 07:10:26,908 DEBUG [RS:0;jenkins-hbase4:34133] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 07:10:26,908 DEBUG [RS:1;jenkins-hbase4:41787] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 07:10:26,908 DEBUG [RS:2;jenkins-hbase4:39057] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 07:10:26,908 DEBUG [RS:0;jenkins-hbase4:34133] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 07:10:26,915 DEBUG [RS:0;jenkins-hbase4:34133] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 07:10:26,915 DEBUG [RS:2;jenkins-hbase4:39057] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 07:10:26,915 DEBUG [RS:1;jenkins-hbase4:41787] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 07:10:26,917 DEBUG [RS:0;jenkins-hbase4:34133] zookeeper.ReadOnlyZKClient(139): Connect 0x1406d513 to 127.0.0.1:56256 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-22 07:10:26,917 DEBUG [RS:2;jenkins-hbase4:39057] zookeeper.ReadOnlyZKClient(139): Connect 0x3626d634 to 127.0.0.1:56256 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:10:26,921 DEBUG [RS:1;jenkins-hbase4:41787] zookeeper.ReadOnlyZKClient(139): Connect 0x7c415f8f to 127.0.0.1:56256 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:10:26,944 DEBUG [RS:0;jenkins-hbase4:34133] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31fcf575, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:10:26,945 DEBUG [RS:0;jenkins-hbase4:34133] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61e23a7a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 07:10:26,953 DEBUG [RS:2;jenkins-hbase4:39057] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@73244e94, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:10:26,953 DEBUG [RS:2;jenkins-hbase4:39057] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3ad9f929, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 07:10:26,959 DEBUG [RS:1;jenkins-hbase4:41787] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7110577f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:10:26,959 DEBUG [RS:1;jenkins-hbase4:41787] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@12c9a5fa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 07:10:26,980 DEBUG [RS:1;jenkins-hbase4:41787] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:41787 2023-07-22 07:10:26,981 DEBUG [RS:2;jenkins-hbase4:39057] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:39057 2023-07-22 07:10:26,983 DEBUG [RS:0;jenkins-hbase4:34133] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:34133 2023-07-22 07:10:26,988 INFO [RS:2;jenkins-hbase4:39057] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 07:10:26,988 INFO [RS:1;jenkins-hbase4:41787] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 07:10:26,988 INFO [RS:1;jenkins-hbase4:41787] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 07:10:26,988 INFO [RS:0;jenkins-hbase4:34133] regionserver.RegionServerCoprocessorHost(66): System 
coprocessor loading is enabled 2023-07-22 07:10:26,989 DEBUG [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 07:10:26,988 INFO [RS:2;jenkins-hbase4:39057] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 07:10:26,989 INFO [RS:0;jenkins-hbase4:34133] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 07:10:26,989 DEBUG [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 07:10:26,989 DEBUG [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 07:10:26,993 INFO [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37061,1690009823266 with isa=jenkins-hbase4.apache.org/172.31.14.131:39057, startcode=1690009825637 2023-07-22 07:10:26,993 INFO [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37061,1690009823266 with isa=jenkins-hbase4.apache.org/172.31.14.131:34133, startcode=1690009825283 2023-07-22 07:10:26,993 INFO [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37061,1690009823266 with isa=jenkins-hbase4.apache.org/172.31.14.131:41787, startcode=1690009825478 2023-07-22 07:10:27,006 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-22 07:10:27,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-22 07:10:27,021 DEBUG [RS:0;jenkins-hbase4:34133] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 07:10:27,022 DEBUG [RS:2;jenkins-hbase4:39057] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 07:10:27,021 DEBUG [RS:1;jenkins-hbase4:41787] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 07:10:27,023 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-22 07:10:27,025 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-22 07:10:27,026 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-22 07:10:27,155 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37051, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 07:10:27,155 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50645, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 07:10:27,155 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60575, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 07:10:27,167 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:27,180 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:27,181 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:27,184 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-22 07:10:27,219 DEBUG [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(2830): 
Master is not running yet 2023-07-22 07:10:27,219 DEBUG [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer(2830): Master is not running yet 2023-07-22 07:10:27,220 WARN [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-22 07:10:27,220 WARN [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-22 07:10:27,220 DEBUG [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(2830): Master is not running yet 2023-07-22 07:10:27,220 WARN [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-22 07:10:27,248 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-22 07:10:27,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-22 07:10:27,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-22 07:10:27,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-22 07:10:27,256 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 07:10:27,256 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 07:10:27,256 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 07:10:27,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 07:10:27,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-22 07:10:27,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 07:10:27,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,270 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690009857270 2023-07-22 07:10:27,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-22 07:10:27,278 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-22 07:10:27,279 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-22 07:10:27,280 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-22 07:10:27,283 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:27,290 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-22 07:10:27,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-22 07:10:27,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-22 07:10:27,292 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-22 07:10:27,292 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-22 07:10:27,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-22 07:10:27,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-22 07:10:27,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-22 07:10:27,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-22 07:10:27,311 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690009827306,5,FailOnTimeoutGroup] 2023-07-22 07:10:27,311 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690009827311,5,FailOnTimeoutGroup] 2023-07-22 07:10:27,311 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,312 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-22 07:10:27,314 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,314 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-22 07:10:27,321 INFO [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37061,1690009823266 with isa=jenkins-hbase4.apache.org/172.31.14.131:39057, startcode=1690009825637 2023-07-22 07:10:27,323 INFO [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37061,1690009823266 with isa=jenkins-hbase4.apache.org/172.31.14.131:41787, startcode=1690009825478 2023-07-22 07:10:27,323 INFO [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37061,1690009823266 with isa=jenkins-hbase4.apache.org/172.31.14.131:34133, startcode=1690009825283 2023-07-22 07:10:27,332 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37061] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:27,333 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-22 07:10:27,334 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-22 07:10:27,339 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37061] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:27,339 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-22 07:10:27,339 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-22 07:10:27,340 DEBUG [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666 2023-07-22 07:10:27,340 DEBUG [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40817 2023-07-22 07:10:27,340 DEBUG [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37185 2023-07-22 07:10:27,342 DEBUG [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666 2023-07-22 07:10:27,342 DEBUG [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40817 2023-07-22 07:10:27,342 DEBUG [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37185 2023-07-22 07:10:27,345 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37061] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:27,346 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-22 07:10:27,346 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-22 07:10:27,347 DEBUG [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666 2023-07-22 07:10:27,347 DEBUG [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40817 2023-07-22 07:10:27,347 DEBUG [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37185 2023-07-22 07:10:27,351 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:27,352 DEBUG [RS:0;jenkins-hbase4:34133] zookeeper.ZKUtil(162): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:27,352 DEBUG [RS:1;jenkins-hbase4:41787] zookeeper.ZKUtil(162): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:27,352 WARN [RS:0;jenkins-hbase4:34133] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 07:10:27,353 WARN [RS:1;jenkins-hbase4:41787] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 07:10:27,353 DEBUG [RS:2;jenkins-hbase4:39057] zookeeper.ZKUtil(162): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:27,353 INFO [RS:1;jenkins-hbase4:41787] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:10:27,353 WARN [RS:2;jenkins-hbase4:39057] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-22 07:10:27,353 INFO [RS:0;jenkins-hbase4:34133] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:10:27,354 INFO [RS:2;jenkins-hbase4:39057] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:10:27,354 DEBUG [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:27,354 DEBUG [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:27,354 DEBUG [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:27,354 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34133,1690009825283] 2023-07-22 07:10:27,354 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41787,1690009825478] 2023-07-22 07:10:27,354 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39057,1690009825637] 2023-07-22 07:10:27,378 DEBUG [RS:0;jenkins-hbase4:34133] zookeeper.ZKUtil(162): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:27,378 DEBUG [RS:1;jenkins-hbase4:41787] zookeeper.ZKUtil(162): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:27,378 DEBUG [RS:2;jenkins-hbase4:39057] zookeeper.ZKUtil(162): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:27,379 DEBUG [RS:0;jenkins-hbase4:34133] zookeeper.ZKUtil(162): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:27,379 DEBUG [RS:1;jenkins-hbase4:41787] zookeeper.ZKUtil(162): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:27,380 DEBUG [RS:0;jenkins-hbase4:34133] zookeeper.ZKUtil(162): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:27,380 DEBUG [RS:2;jenkins-hbase4:39057] zookeeper.ZKUtil(162): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:27,381 DEBUG [RS:1;jenkins-hbase4:41787] zookeeper.ZKUtil(162): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:27,381 
DEBUG [RS:2;jenkins-hbase4:39057] zookeeper.ZKUtil(162): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:27,383 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:27,384 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:27,384 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666 2023-07-22 07:10:27,396 DEBUG [RS:2;jenkins-hbase4:39057] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 07:10:27,396 DEBUG [RS:1;jenkins-hbase4:41787] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 07:10:27,396 DEBUG [RS:0;jenkins-hbase4:34133] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 07:10:27,418 INFO [RS:2;jenkins-hbase4:39057] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 07:10:27,418 INFO [RS:1;jenkins-hbase4:41787] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 07:10:27,423 INFO [RS:0;jenkins-hbase4:34133] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 07:10:27,424 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:27,426 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-22 07:10:27,428 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/info 2023-07-22 07:10:27,429 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-22 07:10:27,430 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:27,431 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-22 07:10:27,440 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/rep_barrier 2023-07-22 07:10:27,441 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-22 07:10:27,442 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:27,442 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-22 07:10:27,445 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/table 2023-07-22 07:10:27,446 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-22 07:10:27,447 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:27,452 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740 2023-07-22 07:10:27,454 INFO [RS:0;jenkins-hbase4:34133] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 07:10:27,454 INFO [RS:1;jenkins-hbase4:41787] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 07:10:27,454 INFO [RS:2;jenkins-hbase4:39057] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 07:10:27,455 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740 2023-07-22 07:10:27,461 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-22 07:10:27,461 INFO [RS:0;jenkins-hbase4:34133] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 07:10:27,461 INFO [RS:1;jenkins-hbase4:41787] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 07:10:27,463 INFO [RS:1;jenkins-hbase4:41787] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,461 INFO [RS:2;jenkins-hbase4:39057] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 07:10:27,462 INFO [RS:0;jenkins-hbase4:34133] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,464 INFO [RS:2;jenkins-hbase4:39057] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
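
The region servers above start PressureAwareCompactionThroughputController with a 100 MB/s upper and 50 MB/s lower compaction throughput bound. A minimal sketch of how those bounds could be set before starting a cluster; the two property keys (hbase.hstore.compaction.throughput.higher.bound / lower.bound) are an assumption on my part and do not appear in this log, so verify them against the HBase version in use:

    // Sketch only: assumes these are the keys read by PressureAwareCompactionThroughputController.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionThroughputBounds {
      public static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024); // 100 MB/s, as logged
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);   // 50 MB/s, as logged
        return conf;
      }
    }
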
2023-07-22 07:10:27,465 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-22 07:10:27,475 INFO [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 07:10:27,475 INFO [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 07:10:27,475 INFO [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 07:10:27,476 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:27,477 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10749964160, jitterRate=0.0011684298515319824}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-22 07:10:27,478 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-22 07:10:27,478 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-22 07:10:27,478 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-22 07:10:27,478 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-22 07:10:27,478 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-22 07:10:27,478 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-22 07:10:27,479 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-22 07:10:27,479 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-22 07:10:27,487 INFO [RS:2;jenkins-hbase4:39057] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,488 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-22 07:10:27,488 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-22 07:10:27,488 DEBUG [RS:2;jenkins-hbase4:39057] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,488 INFO [RS:0;jenkins-hbase4:34133] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,488 DEBUG [RS:2;jenkins-hbase4:39057] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,489 INFO [RS:1;jenkins-hbase4:41787] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-22 07:10:27,489 DEBUG [RS:0;jenkins-hbase4:34133] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,489 DEBUG [RS:2;jenkins-hbase4:39057] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,489 DEBUG [RS:0;jenkins-hbase4:34133] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,489 DEBUG [RS:1;jenkins-hbase4:41787] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,489 DEBUG [RS:0;jenkins-hbase4:34133] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,489 DEBUG [RS:1;jenkins-hbase4:41787] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,489 DEBUG [RS:0;jenkins-hbase4:34133] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,489 DEBUG [RS:1;jenkins-hbase4:41787] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,489 DEBUG [RS:0;jenkins-hbase4:34133] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,489 DEBUG [RS:1;jenkins-hbase4:41787] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,489 DEBUG [RS:0;jenkins-hbase4:34133] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 07:10:27,489 DEBUG [RS:1;jenkins-hbase4:41787] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,489 DEBUG [RS:0;jenkins-hbase4:34133] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,490 DEBUG [RS:1;jenkins-hbase4:41787] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 07:10:27,489 DEBUG [RS:2;jenkins-hbase4:39057] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,490 DEBUG [RS:1;jenkins-hbase4:41787] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,490 DEBUG [RS:0;jenkins-hbase4:34133] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,490 DEBUG [RS:1;jenkins-hbase4:41787] executor.ExecutorService(93): Starting executor service 
name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,490 DEBUG [RS:2;jenkins-hbase4:39057] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,490 DEBUG [RS:1;jenkins-hbase4:41787] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,490 DEBUG [RS:0;jenkins-hbase4:34133] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,490 DEBUG [RS:1;jenkins-hbase4:41787] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,490 DEBUG [RS:0;jenkins-hbase4:34133] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,490 DEBUG [RS:2;jenkins-hbase4:39057] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 07:10:27,490 DEBUG [RS:2;jenkins-hbase4:39057] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,491 DEBUG [RS:2;jenkins-hbase4:39057] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,491 DEBUG [RS:2;jenkins-hbase4:39057] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,491 DEBUG [RS:2;jenkins-hbase4:39057] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:27,492 INFO [RS:0;jenkins-hbase4:34133] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,493 INFO [RS:0;jenkins-hbase4:34133] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,493 INFO [RS:0;jenkins-hbase4:34133] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,497 INFO [RS:1;jenkins-hbase4:41787] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,497 INFO [RS:1;jenkins-hbase4:41787] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,498 INFO [RS:1;jenkins-hbase4:41787] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,500 INFO [RS:2;jenkins-hbase4:39057] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,500 INFO [RS:2;jenkins-hbase4:39057] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-22 07:10:27,500 INFO [RS:2;jenkins-hbase4:39057] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,500 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-22 07:10:27,522 INFO [RS:2;jenkins-hbase4:39057] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 07:10:27,523 INFO [RS:1;jenkins-hbase4:41787] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 07:10:27,527 INFO [RS:0;jenkins-hbase4:34133] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 07:10:27,537 INFO [RS:2;jenkins-hbase4:39057] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39057,1690009825637-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,538 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-22 07:10:27,539 INFO [RS:0;jenkins-hbase4:34133] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34133,1690009825283-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,539 INFO [RS:1;jenkins-hbase4:41787] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41787,1690009825478-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:27,560 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-22 07:10:27,579 INFO [RS:2;jenkins-hbase4:39057] regionserver.Replication(203): jenkins-hbase4.apache.org,39057,1690009825637 started 2023-07-22 07:10:27,580 INFO [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39057,1690009825637, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39057, sessionid=0x1018bdde7740003 2023-07-22 07:10:27,580 DEBUG [RS:2;jenkins-hbase4:39057] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 07:10:27,580 DEBUG [RS:2;jenkins-hbase4:39057] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:27,580 DEBUG [RS:2;jenkins-hbase4:39057] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39057,1690009825637' 2023-07-22 07:10:27,580 DEBUG [RS:2;jenkins-hbase4:39057] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 07:10:27,581 DEBUG [RS:2;jenkins-hbase4:39057] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 07:10:27,581 DEBUG [RS:2;jenkins-hbase4:39057] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 07:10:27,582 DEBUG [RS:2;jenkins-hbase4:39057] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 07:10:27,582 DEBUG 
[RS:2;jenkins-hbase4:39057] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:27,582 DEBUG [RS:2;jenkins-hbase4:39057] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39057,1690009825637' 2023-07-22 07:10:27,582 DEBUG [RS:2;jenkins-hbase4:39057] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 07:10:27,583 DEBUG [RS:2;jenkins-hbase4:39057] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 07:10:27,583 DEBUG [RS:2;jenkins-hbase4:39057] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 07:10:27,583 INFO [RS:2;jenkins-hbase4:39057] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 07:10:27,584 INFO [RS:2;jenkins-hbase4:39057] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-22 07:10:27,586 INFO [RS:0;jenkins-hbase4:34133] regionserver.Replication(203): jenkins-hbase4.apache.org,34133,1690009825283 started 2023-07-22 07:10:27,587 INFO [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34133,1690009825283, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34133, sessionid=0x1018bdde7740001 2023-07-22 07:10:27,587 DEBUG [RS:0;jenkins-hbase4:34133] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 07:10:27,587 DEBUG [RS:0;jenkins-hbase4:34133] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:27,588 INFO [RS:1;jenkins-hbase4:41787] regionserver.Replication(203): jenkins-hbase4.apache.org,41787,1690009825478 started 2023-07-22 07:10:27,588 DEBUG [RS:0;jenkins-hbase4:34133] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34133,1690009825283' 2023-07-22 07:10:27,588 DEBUG [RS:0;jenkins-hbase4:34133] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 07:10:27,588 INFO [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41787,1690009825478, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41787, sessionid=0x1018bdde7740002 2023-07-22 07:10:27,589 DEBUG [RS:1;jenkins-hbase4:41787] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 07:10:27,589 DEBUG [RS:1;jenkins-hbase4:41787] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:27,589 DEBUG [RS:1;jenkins-hbase4:41787] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41787,1690009825478' 2023-07-22 07:10:27,589 DEBUG [RS:1;jenkins-hbase4:41787] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 07:10:27,589 DEBUG [RS:0;jenkins-hbase4:34133] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 07:10:27,589 DEBUG [RS:1;jenkins-hbase4:41787] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 07:10:27,590 DEBUG 
[RS:0;jenkins-hbase4:34133] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 07:10:27,590 DEBUG [RS:0;jenkins-hbase4:34133] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 07:10:27,590 DEBUG [RS:0;jenkins-hbase4:34133] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:27,590 DEBUG [RS:0;jenkins-hbase4:34133] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34133,1690009825283' 2023-07-22 07:10:27,591 DEBUG [RS:1;jenkins-hbase4:41787] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 07:10:27,591 DEBUG [RS:1;jenkins-hbase4:41787] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 07:10:27,591 DEBUG [RS:1;jenkins-hbase4:41787] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:27,591 DEBUG [RS:1;jenkins-hbase4:41787] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41787,1690009825478' 2023-07-22 07:10:27,591 DEBUG [RS:1;jenkins-hbase4:41787] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 07:10:27,591 DEBUG [RS:0;jenkins-hbase4:34133] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 07:10:27,592 DEBUG [RS:1;jenkins-hbase4:41787] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 07:10:27,593 DEBUG [RS:0;jenkins-hbase4:34133] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 07:10:27,593 DEBUG [RS:1;jenkins-hbase4:41787] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 07:10:27,594 DEBUG [RS:0;jenkins-hbase4:34133] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 07:10:27,594 INFO [RS:0;jenkins-hbase4:34133] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 07:10:27,593 INFO [RS:1;jenkins-hbase4:41787] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 07:10:27,595 INFO [RS:1;jenkins-hbase4:41787] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-22 07:10:27,594 INFO [RS:0;jenkins-hbase4:34133] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
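
All three region servers report "Quota support disabled" above, so neither the RPC nor the space quota manager starts. If quota behaviour were under test, the switch would have to be flipped before the mini-cluster comes up; a small sketch, assuming hbase.quota.enabled is the controlling property (the key name is not shown in this log):

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class QuotaEnabledMiniCluster {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Set before startMiniCluster so RegionServerRpcQuotaManager and the
        // space quota manager actually start instead of logging "disabled".
        util.getConfiguration().setBoolean("hbase.quota.enabled", true);
        util.startMiniCluster(3);   // three region servers, as in this run
        util.shutdownMiniCluster();
      }
    }
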
2023-07-22 07:10:27,695 INFO [RS:2;jenkins-hbase4:39057] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39057%2C1690009825637, suffix=, logDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,39057,1690009825637, archiveDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/oldWALs, maxLogs=32 2023-07-22 07:10:27,698 INFO [RS:1;jenkins-hbase4:41787] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41787%2C1690009825478, suffix=, logDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,41787,1690009825478, archiveDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/oldWALs, maxLogs=32 2023-07-22 07:10:27,700 INFO [RS:0;jenkins-hbase4:34133] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34133%2C1690009825283, suffix=, logDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,34133,1690009825283, archiveDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/oldWALs, maxLogs=32 2023-07-22 07:10:27,713 DEBUG [jenkins-hbase4:37061] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-22 07:10:27,753 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40309,DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c,DISK] 2023-07-22 07:10:27,754 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37967,DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4,DISK] 2023-07-22 07:10:27,755 DEBUG [jenkins-hbase4:37061] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:27,756 DEBUG [jenkins-hbase4:37061] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:27,757 DEBUG [jenkins-hbase4:37061] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:27,757 DEBUG [jenkins-hbase4:37061] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:27,758 DEBUG [jenkins-hbase4:37061] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:27,758 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42555,DS-3ac6edaa-2267-467e-8da8-ff002dee7b14,DISK] 2023-07-22 07:10:27,814 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39057,1690009825637, state=OPENING 2023-07-22 07:10:27,815 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:42555,DS-3ac6edaa-2267-467e-8da8-ff002dee7b14,DISK] 2023-07-22 07:10:27,815 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40309,DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c,DISK] 2023-07-22 07:10:27,817 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37967,DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4,DISK] 2023-07-22 07:10:27,819 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37967,DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4,DISK] 2023-07-22 07:10:27,819 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40309,DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c,DISK] 2023-07-22 07:10:27,824 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42555,DS-3ac6edaa-2267-467e-8da8-ff002dee7b14,DISK] 2023-07-22 07:10:27,828 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-22 07:10:27,831 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:27,832 INFO [RS:0;jenkins-hbase4:34133] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,34133,1690009825283/jenkins-hbase4.apache.org%2C34133%2C1690009825283.1690009827702 2023-07-22 07:10:27,833 DEBUG [RS:0;jenkins-hbase4:34133] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40309,DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c,DISK], DatanodeInfoWithStorage[127.0.0.1:37967,DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4,DISK], DatanodeInfoWithStorage[127.0.0.1:42555,DS-3ac6edaa-2267-467e-8da8-ff002dee7b14,DISK]] 2023-07-22 07:10:27,834 INFO [RS:2;jenkins-hbase4:39057] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,39057,1690009825637/jenkins-hbase4.apache.org%2C39057%2C1690009825637.1690009827700 2023-07-22 07:10:27,834 INFO [RS:1;jenkins-hbase4:41787] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,41787,1690009825478/jenkins-hbase4.apache.org%2C41787%2C1690009825478.1690009827702 2023-07-22 07:10:27,834 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 07:10:27,838 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:27,838 DEBUG 
[RS:2;jenkins-hbase4:39057] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42555,DS-3ac6edaa-2267-467e-8da8-ff002dee7b14,DISK], DatanodeInfoWithStorage[127.0.0.1:40309,DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c,DISK], DatanodeInfoWithStorage[127.0.0.1:37967,DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4,DISK]] 2023-07-22 07:10:27,839 DEBUG [RS:1;jenkins-hbase4:41787] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40309,DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c,DISK], DatanodeInfoWithStorage[127.0.0.1:37967,DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4,DISK], DatanodeInfoWithStorage[127.0.0.1:42555,DS-3ac6edaa-2267-467e-8da8-ff002dee7b14,DISK]] 2023-07-22 07:10:27,861 WARN [ReadOnlyZKClient-127.0.0.1:56256@0x653060c4] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-22 07:10:27,887 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37061,1690009823266] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 07:10:27,891 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44332, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 07:10:27,892 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39057] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:44332 deadline: 1690009887891, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:28,016 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:28,020 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 07:10:28,025 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44348, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 07:10:28,037 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-22 07:10:28,038 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:10:28,041 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39057%2C1690009825637.meta, suffix=.meta, logDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,39057,1690009825637, archiveDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/oldWALs, maxLogs=32 2023-07-22 07:10:28,060 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42555,DS-3ac6edaa-2267-467e-8da8-ff002dee7b14,DISK] 2023-07-22 07:10:28,060 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:37967,DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4,DISK] 2023-07-22 07:10:28,061 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40309,DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c,DISK] 2023-07-22 07:10:28,071 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,39057,1690009825637/jenkins-hbase4.apache.org%2C39057%2C1690009825637.meta.1690009828042.meta 2023-07-22 07:10:28,071 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42555,DS-3ac6edaa-2267-467e-8da8-ff002dee7b14,DISK], DatanodeInfoWithStorage[127.0.0.1:37967,DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4,DISK], DatanodeInfoWithStorage[127.0.0.1:40309,DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c,DISK]] 2023-07-22 07:10:28,072 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:28,075 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-22 07:10:28,077 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-22 07:10:28,079 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
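
The open path above loads org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint because the hbase:meta descriptor carries it as coprocessor$1 with priority 536870911. For a user table the same wiring goes through the client-side builders; a sketch (the table name 'demo' and family 'info' are illustrative, not taken from this log):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CoprocessorTableDescriptor {
      // Builds a descriptor carrying the same endpoint the meta descriptor above declares.
      public static TableDescriptor build() throws IOException {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .build();
      }
    }
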
2023-07-22 07:10:28,084 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-22 07:10:28,084 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:28,084 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-22 07:10:28,085 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-22 07:10:28,087 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-22 07:10:28,088 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/info 2023-07-22 07:10:28,089 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/info 2023-07-22 07:10:28,089 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-22 07:10:28,090 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:28,090 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-22 07:10:28,091 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/rep_barrier 2023-07-22 07:10:28,091 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/rep_barrier 2023-07-22 07:10:28,092 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-22 07:10:28,092 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:28,093 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-22 07:10:28,094 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/table 2023-07-22 07:10:28,094 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/table 2023-07-22 07:10:28,094 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-22 07:10:28,095 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:28,096 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740 2023-07-22 07:10:28,098 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740 2023-07-22 07:10:28,102 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
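
FlushLargeStoresPolicy above falls back to memstore-flush-size divided by the number of families (42.7 M) because hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the hbase:meta descriptor. For a user table that bound can be pinned in the descriptor itself; a sketch (table name, family and the 16 MB value are illustrative):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class PerFamilyFlushBound {
      // Pins the per-family flush lower bound instead of the derived
      // memstore-flush-size / number-of-families fallback logged above.
      public static TableDescriptor build() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
            .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                String.valueOf(16L * 1024 * 1024))
            .build();
      }
    }
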
2023-07-22 07:10:28,104 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-22 07:10:28,105 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9730393760, jitterRate=-0.09378646314144135}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-22 07:10:28,105 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-22 07:10:28,117 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690009828013 2023-07-22 07:10:28,135 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-22 07:10:28,135 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-22 07:10:28,136 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39057,1690009825637, state=OPEN 2023-07-22 07:10:28,138 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-22 07:10:28,138 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 07:10:28,143 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-22 07:10:28,143 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39057,1690009825637 in 300 msec 2023-07-22 07:10:28,149 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-22 07:10:28,149 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 644 msec 2023-07-22 07:10:28,158 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.1210 sec 2023-07-22 07:10:28,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690009828158, completionTime=-1 2023-07-22 07:10:28,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-22 07:10:28,158 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-22 07:10:28,222 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-22 07:10:28,222 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690009888222 2023-07-22 07:10:28,222 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690009948222 2023-07-22 07:10:28,222 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 63 msec 2023-07-22 07:10:28,238 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37061,1690009823266-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:28,239 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37061,1690009823266-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:28,239 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37061,1690009823266-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:28,241 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:37061, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:28,241 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:28,248 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-22 07:10:28,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-22 07:10:28,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:28,268 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-22 07:10:28,271 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:10:28,275 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:10:28,293 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/hbase/namespace/5e60efed13e0136703971d9f91095391 2023-07-22 07:10:28,295 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/hbase/namespace/5e60efed13e0136703971d9f91095391 empty. 2023-07-22 07:10:28,296 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/hbase/namespace/5e60efed13e0136703971d9f91095391 2023-07-22 07:10:28,296 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-22 07:10:28,351 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:28,353 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5e60efed13e0136703971d9f91095391, NAME => 'hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:28,391 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:28,391 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 5e60efed13e0136703971d9f91095391, disabling compactions & flushes 2023-07-22 07:10:28,391 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391. 
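
The CREATE above spells out the schema the master uses for hbase:namespace (family 'info', VERSIONS 10, IN_MEMORY, 8 KB blocks, ROW bloom filter). The same shape expressed through the 2.x client builders, for comparison; this is a sketch with an illustrative table name, not the master's internal code path:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateNamespaceLikeTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("demo_ns_like"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                  .setMaxVersions(10)          // VERSIONS => '10'
                  .setInMemory(true)           // IN_MEMORY => 'true'
                  .setBlocksize(8192)          // BLOCKSIZE => '8192'
                  .setBloomFilterType(BloomType.ROW)  // BLOOMFILTER => 'ROW'
                  .build())
              .build());
        }
      }
    }
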
2023-07-22 07:10:28,391 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391. 2023-07-22 07:10:28,391 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391. after waiting 0 ms 2023-07-22 07:10:28,391 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391. 2023-07-22 07:10:28,391 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391. 2023-07-22 07:10:28,391 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 5e60efed13e0136703971d9f91095391: 2023-07-22 07:10:28,395 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:10:28,408 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37061,1690009823266] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:28,410 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37061,1690009823266] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-22 07:10:28,413 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:10:28,416 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:10:28,417 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690009828398"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009828398"}]},"ts":"1690009828398"} 2023-07-22 07:10:28,420 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/hbase/rsgroup/ab3ddd109495a2bae7ed1c5a746f16d4 2023-07-22 07:10:28,421 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/hbase/rsgroup/ab3ddd109495a2bae7ed1c5a746f16d4 empty. 
2023-07-22 07:10:28,421 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/hbase/rsgroup/ab3ddd109495a2bae7ed1c5a746f16d4 2023-07-22 07:10:28,421 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-22 07:10:28,463 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:28,468 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => ab3ddd109495a2bae7ed1c5a746f16d4, NAME => 'hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:28,477 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 07:10:28,479 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:10:28,487 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009828480"}]},"ts":"1690009828480"} 2023-07-22 07:10:28,495 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-22 07:10:28,500 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:28,500 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:28,500 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:28,500 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:28,500 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:28,501 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:28,502 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing ab3ddd109495a2bae7ed1c5a746f16d4, disabling compactions & flushes 2023-07-22 07:10:28,502 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4. 
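The hbase:rsgroup descriptor shown above carries two table-level attributes: the MultiRowMutationEndpoint coprocessor and DisabledRegionSplitPolicy, so the group metadata table never splits. As a hedged sketch only (hypothetical table name, and assuming the TableDescriptorBuilder methods of the 2.x client), a table with the same two attributes could be declared like this:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RSGroupLikeTableSketch {
      public static TableDescriptor build() throws Exception {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo_rsgroup_like"))
            // Coprocessor and split policy mirroring the TABLE_ATTRIBUTES in the log entry above.
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setRegionSplitPolicyClassName("org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
                .setMaxVersions(1)
                .build())
            .build();
      }
    }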
2023-07-22 07:10:28,502 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4. 2023-07-22 07:10:28,502 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4. after waiting 0 ms 2023-07-22 07:10:28,502 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4. 2023-07-22 07:10:28,502 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4. 2023-07-22 07:10:28,502 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for ab3ddd109495a2bae7ed1c5a746f16d4: 2023-07-22 07:10:28,503 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5e60efed13e0136703971d9f91095391, ASSIGN}] 2023-07-22 07:10:28,506 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5e60efed13e0136703971d9f91095391, ASSIGN 2023-07-22 07:10:28,507 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:10:28,508 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=5e60efed13e0136703971d9f91095391, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41787,1690009825478; forceNewPlan=false, retain=false 2023-07-22 07:10:28,509 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690009828509"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009828509"}]},"ts":"1690009828509"} 2023-07-22 07:10:28,514 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-22 07:10:28,515 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:10:28,515 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009828515"}]},"ts":"1690009828515"} 2023-07-22 07:10:28,520 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-22 07:10:28,525 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:28,525 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:28,525 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:28,525 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:28,525 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:28,525 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=ab3ddd109495a2bae7ed1c5a746f16d4, ASSIGN}] 2023-07-22 07:10:28,528 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=ab3ddd109495a2bae7ed1c5a746f16d4, ASSIGN 2023-07-22 07:10:28,529 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=ab3ddd109495a2bae7ed1c5a746f16d4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41787,1690009825478; forceNewPlan=false, retain=false 2023-07-22 07:10:28,530 INFO [jenkins-hbase4:37061] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-22 07:10:28,531 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=ab3ddd109495a2bae7ed1c5a746f16d4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:28,531 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=5e60efed13e0136703971d9f91095391, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:28,532 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690009828531"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009828531"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009828531"}]},"ts":"1690009828531"} 2023-07-22 07:10:28,532 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690009828531"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009828531"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009828531"}]},"ts":"1690009828531"} 2023-07-22 07:10:28,536 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure ab3ddd109495a2bae7ed1c5a746f16d4, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:28,538 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=6, state=RUNNABLE; OpenRegionProcedure 5e60efed13e0136703971d9f91095391, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:28,689 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:28,689 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 07:10:28,693 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40164, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 07:10:28,699 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391. 
2023-07-22 07:10:28,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5e60efed13e0136703971d9f91095391, NAME => 'hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:28,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 5e60efed13e0136703971d9f91095391 2023-07-22 07:10:28,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:28,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5e60efed13e0136703971d9f91095391 2023-07-22 07:10:28,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5e60efed13e0136703971d9f91095391 2023-07-22 07:10:28,703 INFO [StoreOpener-5e60efed13e0136703971d9f91095391-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5e60efed13e0136703971d9f91095391 2023-07-22 07:10:28,706 DEBUG [StoreOpener-5e60efed13e0136703971d9f91095391-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/namespace/5e60efed13e0136703971d9f91095391/info 2023-07-22 07:10:28,706 DEBUG [StoreOpener-5e60efed13e0136703971d9f91095391-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/namespace/5e60efed13e0136703971d9f91095391/info 2023-07-22 07:10:28,706 INFO [StoreOpener-5e60efed13e0136703971d9f91095391-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5e60efed13e0136703971d9f91095391 columnFamilyName info 2023-07-22 07:10:28,707 INFO [StoreOpener-5e60efed13e0136703971d9f91095391-1] regionserver.HStore(310): Store=5e60efed13e0136703971d9f91095391/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:28,708 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/namespace/5e60efed13e0136703971d9f91095391 2023-07-22 07:10:28,709 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/namespace/5e60efed13e0136703971d9f91095391 2023-07-22 07:10:28,713 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5e60efed13e0136703971d9f91095391 2023-07-22 07:10:28,717 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/namespace/5e60efed13e0136703971d9f91095391/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:28,717 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5e60efed13e0136703971d9f91095391; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11547428480, jitterRate=0.07543808221817017}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:28,717 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5e60efed13e0136703971d9f91095391: 2023-07-22 07:10:28,719 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391., pid=9, masterSystemTime=1690009828689 2023-07-22 07:10:28,723 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391. 2023-07-22 07:10:28,724 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391. 2023-07-22 07:10:28,724 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4. 2023-07-22 07:10:28,724 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ab3ddd109495a2bae7ed1c5a746f16d4, NAME => 'hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:28,724 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-22 07:10:28,724 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4. service=MultiRowMutationService 2023-07-22 07:10:28,725 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-22 07:10:28,725 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup ab3ddd109495a2bae7ed1c5a746f16d4 2023-07-22 07:10:28,725 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:28,725 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ab3ddd109495a2bae7ed1c5a746f16d4 2023-07-22 07:10:28,725 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ab3ddd109495a2bae7ed1c5a746f16d4 2023-07-22 07:10:28,725 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=5e60efed13e0136703971d9f91095391, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:28,726 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690009828724"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009828724"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009828724"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009828724"}]},"ts":"1690009828724"} 2023-07-22 07:10:28,727 INFO [StoreOpener-ab3ddd109495a2bae7ed1c5a746f16d4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region ab3ddd109495a2bae7ed1c5a746f16d4 2023-07-22 07:10:28,729 DEBUG [StoreOpener-ab3ddd109495a2bae7ed1c5a746f16d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/rsgroup/ab3ddd109495a2bae7ed1c5a746f16d4/m 2023-07-22 07:10:28,729 DEBUG [StoreOpener-ab3ddd109495a2bae7ed1c5a746f16d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/rsgroup/ab3ddd109495a2bae7ed1c5a746f16d4/m 2023-07-22 07:10:28,730 INFO [StoreOpener-ab3ddd109495a2bae7ed1c5a746f16d4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ab3ddd109495a2bae7ed1c5a746f16d4 columnFamilyName m 2023-07-22 07:10:28,731 INFO [StoreOpener-ab3ddd109495a2bae7ed1c5a746f16d4-1] regionserver.HStore(310): Store=ab3ddd109495a2bae7ed1c5a746f16d4/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:28,732 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/rsgroup/ab3ddd109495a2bae7ed1c5a746f16d4 2023-07-22 07:10:28,733 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/rsgroup/ab3ddd109495a2bae7ed1c5a746f16d4 2023-07-22 07:10:28,733 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=6 2023-07-22 07:10:28,733 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=6, state=SUCCESS; OpenRegionProcedure 5e60efed13e0136703971d9f91095391, server=jenkins-hbase4.apache.org,41787,1690009825478 in 191 msec 2023-07-22 07:10:28,737 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-22 07:10:28,737 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ab3ddd109495a2bae7ed1c5a746f16d4 2023-07-22 07:10:28,737 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=5e60efed13e0136703971d9f91095391, ASSIGN in 230 msec 2023-07-22 07:10:28,738 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:10:28,739 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009828738"}]},"ts":"1690009828738"} 2023-07-22 07:10:28,741 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/rsgroup/ab3ddd109495a2bae7ed1c5a746f16d4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:28,741 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-22 07:10:28,742 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ab3ddd109495a2bae7ed1c5a746f16d4; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@20a06568, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:28,742 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ab3ddd109495a2bae7ed1c5a746f16d4: 2023-07-22 07:10:28,743 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4., pid=8, masterSystemTime=1690009828689 2023-07-22 07:10:28,745 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:10:28,746 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4. 2023-07-22 07:10:28,746 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4. 2023-07-22 07:10:28,747 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=ab3ddd109495a2bae7ed1c5a746f16d4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:28,748 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690009828747"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009828747"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009828747"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009828747"}]},"ts":"1690009828747"} 2023-07-22 07:10:28,748 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 485 msec 2023-07-22 07:10:28,754 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-22 07:10:28,754 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure ab3ddd109495a2bae7ed1c5a746f16d4, server=jenkins-hbase4.apache.org,41787,1690009825478 in 214 msec 2023-07-22 07:10:28,758 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-22 07:10:28,758 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=ab3ddd109495a2bae7ed1c5a746f16d4, ASSIGN in 229 msec 2023-07-22 07:10:28,759 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:10:28,760 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009828759"}]},"ts":"1690009828759"} 2023-07-22 07:10:28,762 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-22 07:10:28,764 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:10:28,767 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 356 msec 2023-07-22 07:10:28,771 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-22 07:10:28,772 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-22 07:10:28,772 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:28,796 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 07:10:28,799 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40176, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 07:10:28,820 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-22 07:10:28,823 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-22 07:10:28,823 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-22 07:10:28,843 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 07:10:28,851 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 45 msec 2023-07-22 07:10:28,853 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-22 07:10:28,869 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 07:10:28,875 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 21 msec 2023-07-22 07:10:28,890 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-22 07:10:28,894 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-22 07:10:28,894 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.037sec 2023-07-22 07:10:28,897 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-22 07:10:28,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
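The two CreateNamespaceProcedure entries above (pid=10 and pid=11) are the master bootstrapping the built-in 'default' and 'hbase' namespaces just before it reports "Master has completed initialization". User namespaces go through the same procedure; a minimal sketch against the public Admin API, with a hypothetical namespace name and connection setup:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class NamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Triggers a CreateNamespaceProcedure on the master, like pid=10/pid=11 above.
          admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
          for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
            System.out.println(ns.getName());  // expect at least: default, hbase, demo_ns
          }
        }
      }
    }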
2023-07-22 07:10:28,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-22 07:10:28,901 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37061,1690009823266-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-22 07:10:28,902 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37061,1690009823266-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-22 07:10:28,904 DEBUG [Listener at localhost/46507] zookeeper.ReadOnlyZKClient(139): Connect 0x4b32111a to 127.0.0.1:56256 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:10:28,908 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:28,908 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:28,911 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-22 07:10:28,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-22 07:10:28,915 DEBUG [Listener at localhost/46507] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@227ce6b3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:10:28,920 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-22 07:10:28,933 DEBUG [hconnection-0x422d8bf2-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 07:10:28,947 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44358, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 07:10:28,958 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,37061,1690009823266 2023-07-22 07:10:28,960 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:28,970 DEBUG [Listener at localhost/46507] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-22 07:10:28,974 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38908, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-22 07:10:28,988 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 
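Once the RSGroupStartupWorker reports that hbase:rsgroup is online and GroupBasedLoadBalancer is up, group metadata can be read through the RSGroupAdminService endpoint; that is the "list rsgroup" / ListRSGroupInfos request that shows up shortly below. As a sketch only, assuming the RSGroupAdminClient class shipped in this branch's hbase-rsgroup module (an internal, non-public API, so the constructor and method names here are an assumption rather than a documented contract):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListRSGroupsSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);  // assumed constructor
          // Issues RSGroupAdminService.ListRSGroupInfos, the same master RPC logged below.
          for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            System.out.println(group.getName() + " servers=" + group.getServers());
          }
        }
      }
    }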
2023-07-22 07:10:28,988 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:28,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-22 07:10:28,994 DEBUG [Listener at localhost/46507] zookeeper.ReadOnlyZKClient(139): Connect 0x11ddf8cf to 127.0.0.1:56256 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:10:28,999 DEBUG [Listener at localhost/46507] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6a74af9e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:10:28,999 INFO [Listener at localhost/46507] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:56256 2023-07-22 07:10:29,006 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:10:29,009 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1018bdde774000a connected 2023-07-22 07:10:29,039 INFO [Listener at localhost/46507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=419, OpenFileDescriptor=675, MaxFileDescriptor=60000, SystemLoadAverage=361, ProcessCount=180, AvailableMemoryMB=7734 2023-07-22 07:10:29,042 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-22 07:10:29,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:29,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:29,112 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-22 07:10:29,125 INFO [Listener at localhost/46507] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 07:10:29,125 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:29,125 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:29,125 INFO [Listener at localhost/46507] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 07:10:29,125 INFO [Listener at localhost/46507] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:29,125 INFO [Listener 
at localhost/46507] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 07:10:29,126 INFO [Listener at localhost/46507] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 07:10:29,128 INFO [Listener at localhost/46507] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33357 2023-07-22 07:10:29,128 INFO [Listener at localhost/46507] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 07:10:29,129 DEBUG [Listener at localhost/46507] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 07:10:29,131 INFO [Listener at localhost/46507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:29,134 INFO [Listener at localhost/46507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:29,136 INFO [Listener at localhost/46507] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33357 connecting to ZooKeeper ensemble=127.0.0.1:56256 2023-07-22 07:10:29,140 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:333570x0, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:10:29,142 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33357-0x1018bdde774000b connected 2023-07-22 07:10:29,142 DEBUG [Listener at localhost/46507] zookeeper.ZKUtil(162): regionserver:33357-0x1018bdde774000b, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 07:10:29,143 DEBUG [Listener at localhost/46507] zookeeper.ZKUtil(162): regionserver:33357-0x1018bdde774000b, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-22 07:10:29,144 DEBUG [Listener at localhost/46507] zookeeper.ZKUtil(164): regionserver:33357-0x1018bdde774000b, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 07:10:29,149 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33357 2023-07-22 07:10:29,149 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33357 2023-07-22 07:10:29,150 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33357 2023-07-22 07:10:29,150 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33357 2023-07-22 07:10:29,150 DEBUG [Listener at localhost/46507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33357 2023-07-22 07:10:29,153 INFO [Listener at localhost/46507] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 07:10:29,153 INFO [Listener at localhost/46507] 
http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 07:10:29,153 INFO [Listener at localhost/46507] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 07:10:29,153 INFO [Listener at localhost/46507] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 07:10:29,153 INFO [Listener at localhost/46507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 07:10:29,154 INFO [Listener at localhost/46507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 07:10:29,154 INFO [Listener at localhost/46507] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-22 07:10:29,154 INFO [Listener at localhost/46507] http.HttpServer(1146): Jetty bound to port 38675 2023-07-22 07:10:29,154 INFO [Listener at localhost/46507] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:29,157 INFO [Listener at localhost/46507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:29,158 INFO [Listener at localhost/46507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@726126ec{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/hadoop.log.dir/,AVAILABLE} 2023-07-22 07:10:29,158 INFO [Listener at localhost/46507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:29,158 INFO [Listener at localhost/46507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2474d7bd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 07:10:29,297 INFO [Listener at localhost/46507] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 07:10:29,297 INFO [Listener at localhost/46507] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 07:10:29,298 INFO [Listener at localhost/46507] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 07:10:29,298 INFO [Listener at localhost/46507] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-22 07:10:29,299 INFO [Listener at localhost/46507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:29,300 INFO [Listener at localhost/46507] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@3b6361e2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/java.io.tmpdir/jetty-0_0_0_0-38675-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5395979307441267128/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:29,302 INFO [Listener at localhost/46507] server.AbstractConnector(333): Started ServerConnector@702b003c{HTTP/1.1, (http/1.1)}{0.0.0.0:38675} 2023-07-22 07:10:29,302 INFO [Listener at localhost/46507] server.Server(415): Started @11707ms 2023-07-22 07:10:29,305 INFO [RS:3;jenkins-hbase4:33357] regionserver.HRegionServer(951): ClusterId : d8ac43d8-c035-4739-ae09-79d2c1778afb 2023-07-22 07:10:29,306 DEBUG [RS:3;jenkins-hbase4:33357] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 07:10:29,309 DEBUG [RS:3;jenkins-hbase4:33357] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 07:10:29,309 DEBUG [RS:3;jenkins-hbase4:33357] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 07:10:29,311 DEBUG [RS:3;jenkins-hbase4:33357] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 07:10:29,313 DEBUG [RS:3;jenkins-hbase4:33357] zookeeper.ReadOnlyZKClient(139): Connect 0x13fbea4d to 127.0.0.1:56256 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:10:29,328 DEBUG [RS:3;jenkins-hbase4:33357] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1d3c1642, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:10:29,328 DEBUG [RS:3;jenkins-hbase4:33357] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1903813d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 07:10:29,341 DEBUG [RS:3;jenkins-hbase4:33357] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:33357 2023-07-22 07:10:29,341 INFO [RS:3;jenkins-hbase4:33357] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 07:10:29,341 INFO [RS:3;jenkins-hbase4:33357] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 07:10:29,342 DEBUG [RS:3;jenkins-hbase4:33357] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-22 07:10:29,342 INFO [RS:3;jenkins-hbase4:33357] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37061,1690009823266 with isa=jenkins-hbase4.apache.org/172.31.14.131:33357, startcode=1690009829125 2023-07-22 07:10:29,343 DEBUG [RS:3;jenkins-hbase4:33357] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 07:10:29,347 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51405, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 07:10:29,348 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37061] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:29,348 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-22 07:10:29,348 DEBUG [RS:3;jenkins-hbase4:33357] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666 2023-07-22 07:10:29,349 DEBUG [RS:3;jenkins-hbase4:33357] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40817 2023-07-22 07:10:29,349 DEBUG [RS:3;jenkins-hbase4:33357] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37185 2023-07-22 07:10:29,355 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:29,355 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:29,355 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:29,355 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:29,356 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33357,1690009829125] 2023-07-22 07:10:29,356 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:29,357 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:29,357 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:29,357 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:29,357 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-22 07:10:29,358 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:29,358 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:29,359 DEBUG [RS:3;jenkins-hbase4:33357] zookeeper.ZKUtil(162): regionserver:33357-0x1018bdde774000b, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:29,360 WARN [RS:3;jenkins-hbase4:33357] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 07:10:29,360 INFO [RS:3;jenkins-hbase4:33357] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:10:29,360 DEBUG [RS:3;jenkins-hbase4:33357] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:29,371 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37061,1690009823266] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-22 07:10:29,371 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:29,372 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:29,372 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:29,372 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:29,372 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:29,372 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:29,373 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:29,376 DEBUG [RS:3;jenkins-hbase4:33357] zookeeper.ZKUtil(162): regionserver:33357-0x1018bdde774000b, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:29,376 DEBUG [RS:3;jenkins-hbase4:33357] zookeeper.ZKUtil(162): regionserver:33357-0x1018bdde774000b, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:29,377 DEBUG [RS:3;jenkins-hbase4:33357] zookeeper.ZKUtil(162): regionserver:33357-0x1018bdde774000b, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:29,378 DEBUG [RS:3;jenkins-hbase4:33357] zookeeper.ZKUtil(162): regionserver:33357-0x1018bdde774000b, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:29,379 DEBUG [RS:3;jenkins-hbase4:33357] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 07:10:29,379 INFO [RS:3;jenkins-hbase4:33357] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 07:10:29,381 INFO [RS:3;jenkins-hbase4:33357] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 07:10:29,382 INFO [RS:3;jenkins-hbase4:33357] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 07:10:29,382 INFO [RS:3;jenkins-hbase4:33357] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:29,382 INFO [RS:3;jenkins-hbase4:33357] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 07:10:29,384 INFO [RS:3;jenkins-hbase4:33357] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
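The WALProvider instantiation logged just above shows RS:3 using AsyncFSWALProvider. A minimal sketch, assuming a test-side Configuration helper (not code from this run), of pinning that provider via the real "hbase.wal.provider" key:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderSketch {
  // Returns a Configuration that selects the async WAL provider seen in the log.
  public static Configuration asyncWalConf() {
    Configuration conf = HBaseConfiguration.create();
    // "asyncfs" maps to org.apache.hadoop.hbase.wal.AsyncFSWALProvider.
    conf.set("hbase.wal.provider", "asyncfs");
    return conf;
  }
}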
2023-07-22 07:10:29,384 DEBUG [RS:3;jenkins-hbase4:33357] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:29,384 DEBUG [RS:3;jenkins-hbase4:33357] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:29,385 DEBUG [RS:3;jenkins-hbase4:33357] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:29,385 DEBUG [RS:3;jenkins-hbase4:33357] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:29,385 DEBUG [RS:3;jenkins-hbase4:33357] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:29,385 DEBUG [RS:3;jenkins-hbase4:33357] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 07:10:29,385 DEBUG [RS:3;jenkins-hbase4:33357] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:29,385 DEBUG [RS:3;jenkins-hbase4:33357] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:29,385 DEBUG [RS:3;jenkins-hbase4:33357] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:29,385 DEBUG [RS:3;jenkins-hbase4:33357] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:29,386 INFO [RS:3;jenkins-hbase4:33357] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:29,386 INFO [RS:3;jenkins-hbase4:33357] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:29,386 INFO [RS:3;jenkins-hbase4:33357] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:29,406 INFO [RS:3;jenkins-hbase4:33357] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 07:10:29,406 INFO [RS:3;jenkins-hbase4:33357] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33357,1690009829125-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-22 07:10:29,418 INFO [RS:3;jenkins-hbase4:33357] regionserver.Replication(203): jenkins-hbase4.apache.org,33357,1690009829125 started 2023-07-22 07:10:29,418 INFO [RS:3;jenkins-hbase4:33357] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33357,1690009829125, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33357, sessionid=0x1018bdde774000b 2023-07-22 07:10:29,418 DEBUG [RS:3;jenkins-hbase4:33357] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 07:10:29,418 DEBUG [RS:3;jenkins-hbase4:33357] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:29,418 DEBUG [RS:3;jenkins-hbase4:33357] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33357,1690009829125' 2023-07-22 07:10:29,418 DEBUG [RS:3;jenkins-hbase4:33357] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 07:10:29,419 DEBUG [RS:3;jenkins-hbase4:33357] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 07:10:29,420 DEBUG [RS:3;jenkins-hbase4:33357] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 07:10:29,420 DEBUG [RS:3;jenkins-hbase4:33357] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 07:10:29,420 DEBUG [RS:3;jenkins-hbase4:33357] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:29,420 DEBUG [RS:3;jenkins-hbase4:33357] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33357,1690009829125' 2023-07-22 07:10:29,420 DEBUG [RS:3;jenkins-hbase4:33357] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 07:10:29,423 DEBUG [RS:3;jenkins-hbase4:33357] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 07:10:29,423 DEBUG [RS:3;jenkins-hbase4:33357] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 07:10:29,423 INFO [RS:3;jenkins-hbase4:33357] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 07:10:29,424 INFO [RS:3;jenkins-hbase4:33357] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
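The RS:3 lines above trace a fourth region server (port 33357) joining the already-running mini cluster: reportForDuty to the master, registration of its ephemeral znode under /hbase/rs, WAL and chore setup, and the flush/snapshot procedure managers coming online. A minimal sketch of how a test can start such an extra server; the TEST_UTIL-style usage is assumed, not the exact test code.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class ExtraRegionServerSketch {
  // Starts one more region server on a running mini cluster and blocks
  // until it has checked in with the master, as RS:3 does above.
  public static void addRegionServer(HBaseTestingUtility testUtil) throws Exception {
    JVMClusterUtil.RegionServerThread rst =
        testUtil.getMiniHBaseCluster().startRegionServer();
    rst.waitForServerOnline();
  }
}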
2023-07-22 07:10:29,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:29,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:29,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:29,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:29,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:29,439 DEBUG [hconnection-0x2e79eb29-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 07:10:29,443 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44370, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 07:10:29,451 DEBUG [hconnection-0x2e79eb29-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 07:10:29,453 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40190, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 07:10:29,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:29,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:29,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:29,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:29,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:38908 deadline: 1690011029465, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:29,467 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:10:29,469 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:29,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:29,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:29,471 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:29,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:29,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:29,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:29,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:29,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:29,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:29,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:29,484 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:29,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:29,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:29,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:29,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:29,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:33357] to rsgroup Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:29,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:29,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:29,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:29,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:29,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-22 07:10:29,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33357,1690009829125, jenkins-hbase4.apache.org,34133,1690009825283] are moved back to default 2023-07-22 07:10:29,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:29,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:29,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:29,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:29,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, 
group=Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:29,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:29,527 INFO [RS:3;jenkins-hbase4:33357] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33357%2C1690009829125, suffix=, logDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,33357,1690009829125, archiveDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/oldWALs, maxLogs=32 2023-07-22 07:10:29,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:29,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:29,543 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:10:29,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 12 2023-07-22 07:10:29,552 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:29,555 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:29,555 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:29,556 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:29,564 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40309,DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c,DISK] 2023-07-22 07:10:29,564 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42555,DS-3ac6edaa-2267-467e-8da8-ff002dee7b14,DISK] 2023-07-22 07:10:29,565 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37967,DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4,DISK] 2023-07-22 07:10:29,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if 
procedure is done pid=12 2023-07-22 07:10:29,574 INFO [RS:3;jenkins-hbase4:33357] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,33357,1690009829125/jenkins-hbase4.apache.org%2C33357%2C1690009829125.1690009829529 2023-07-22 07:10:29,575 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:10:29,575 DEBUG [RS:3;jenkins-hbase4:33357] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40309,DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c,DISK], DatanodeInfoWithStorage[127.0.0.1:42555,DS-3ac6edaa-2267-467e-8da8-ff002dee7b14,DISK], DatanodeInfoWithStorage[127.0.0.1:37967,DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4,DISK]] 2023-07-22 07:10:29,582 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:29,583 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:29,584 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69 empty. 2023-07-22 07:10:29,584 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:29,586 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c empty. 2023-07-22 07:10:29,586 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:29,586 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:29,587 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524 empty. 2023-07-22 07:10:29,587 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:29,587 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f empty. 
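The RSGroupAdminService calls logged above show the per-test setup: the attempt to move the master's address (port 37061) into a group is rejected with ConstraintException because only live region servers belong to rsgroups, and the test only notes it ("Got this on setup, FYI"); the group Group_testTableMoveTruncateAndDrop_130911739 is then added and two region servers (ports 34133 and 33357) are moved into it. A hedged sketch of the equivalent client calls, assuming an open Connection; this is illustrative, not the test's own code.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupSetupSketch {
  public static void moveServersIntoGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    String group = "Group_testTableMoveTruncateAndDrop_130911739";
    rsGroupAdmin.addRSGroup(group);

    // Move two region servers out of the default group into the new one.
    Set<Address> servers = new HashSet<>(Arrays.asList(
        Address.fromParts("jenkins-hbase4.apache.org", 34133),
        Address.fromParts("jenkins-hbase4.apache.org", 33357)));
    rsGroupAdmin.moveServers(servers, group);

    // Passing the master's host:port (37061 above) instead would throw
    // ConstraintException: "Server ... is either offline or it does not exist."
  }
}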
2023-07-22 07:10:29,588 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71 empty. 2023-07-22 07:10:29,588 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:29,588 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:29,589 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:29,589 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:29,589 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-22 07:10:29,623 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:29,626 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 97ad6d84bd0754d12df91fb12808fc69, NAME => 'Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:29,627 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => cb5cc297e84ee621ef116b994f44e02c, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:29,627 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 4416e68564631a31aeee1cd90765b524, NAME => 
'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:29,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-22 07:10:29,681 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:29,682 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 97ad6d84bd0754d12df91fb12808fc69, disabling compactions & flushes 2023-07-22 07:10:29,682 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 2023-07-22 07:10:29,682 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 2023-07-22 07:10:29,682 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. after waiting 0 ms 2023-07-22 07:10:29,683 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 2023-07-22 07:10:29,683 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 
2023-07-22 07:10:29,683 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 97ad6d84bd0754d12df91fb12808fc69: 2023-07-22 07:10:29,684 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => c8e1347c16201fa7264dd07badd12c71, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:29,691 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:29,692 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 4416e68564631a31aeee1cd90765b524, disabling compactions & flushes 2023-07-22 07:10:29,692 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 2023-07-22 07:10:29,692 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 2023-07-22 07:10:29,692 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. after waiting 0 ms 2023-07-22 07:10:29,692 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 2023-07-22 07:10:29,692 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 
2023-07-22 07:10:29,692 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 4416e68564631a31aeee1cd90765b524: 2023-07-22 07:10:29,693 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 6e16235d88cb385592fd0b9887a65c2f, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:29,703 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:29,704 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing cb5cc297e84ee621ef116b994f44e02c, disabling compactions & flushes 2023-07-22 07:10:29,704 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 2023-07-22 07:10:29,705 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 2023-07-22 07:10:29,705 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. after waiting 0 ms 2023-07-22 07:10:29,705 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 2023-07-22 07:10:29,705 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 
2023-07-22 07:10:29,705 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for cb5cc297e84ee621ef116b994f44e02c: 2023-07-22 07:10:29,741 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:29,742 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing c8e1347c16201fa7264dd07badd12c71, disabling compactions & flushes 2023-07-22 07:10:29,742 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 2023-07-22 07:10:29,742 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 2023-07-22 07:10:29,742 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. after waiting 0 ms 2023-07-22 07:10:29,742 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 2023-07-22 07:10:29,742 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 2023-07-22 07:10:29,742 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for c8e1347c16201fa7264dd07badd12c71: 2023-07-22 07:10:29,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-22 07:10:30,142 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:30,142 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 6e16235d88cb385592fd0b9887a65c2f, disabling compactions & flushes 2023-07-22 07:10:30,142 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 2023-07-22 07:10:30,142 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 2023-07-22 07:10:30,142 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 
after waiting 0 ms 2023-07-22 07:10:30,142 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 2023-07-22 07:10:30,142 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 2023-07-22 07:10:30,142 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 6e16235d88cb385592fd0b9887a65c2f: 2023-07-22 07:10:30,146 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:10:30,148 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009830147"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009830147"}]},"ts":"1690009830147"} 2023-07-22 07:10:30,148 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009830147"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009830147"}]},"ts":"1690009830147"} 2023-07-22 07:10:30,148 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009830147"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009830147"}]},"ts":"1690009830147"} 2023-07-22 07:10:30,148 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009830147"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009830147"}]},"ts":"1690009830147"} 2023-07-22 07:10:30,149 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009830147"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009830147"}]},"ts":"1690009830147"} 2023-07-22 07:10:30,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-22 07:10:30,197 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
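The meta Puts and the "Added 5 regions to meta" line above correspond to a table pre-split into five regions with boundaries '', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz', ''. A minimal sketch of creating a similarly pre-split table through the Admin API; deriving the split keys with Bytes.split is an assumption about how such boundaries are produced, not a claim about the test's exact code.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitTableSketch {
  public static void createPreSplitTable(Admin admin) throws Exception {
    TableName name = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    TableDescriptor desc = TableDescriptorBuilder.newBuilder(name)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        .build();
    // Boundary keys plus two interpolated points give four split keys,
    // i.e. five regions, matching the assignment plan logged above.
    byte[][] splits = Bytes.split(Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz"), 2);
    admin.createTable(desc, splits);
  }
}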
2023-07-22 07:10:30,198 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:10:30,199 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009830198"}]},"ts":"1690009830198"} 2023-07-22 07:10:30,201 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-22 07:10:30,210 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:30,211 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:30,211 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:30,211 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:30,212 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97ad6d84bd0754d12df91fb12808fc69, ASSIGN}, {pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb5cc297e84ee621ef116b994f44e02c, ASSIGN}, {pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4416e68564631a31aeee1cd90765b524, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8e1347c16201fa7264dd07badd12c71, ASSIGN}, {pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e16235d88cb385592fd0b9887a65c2f, ASSIGN}] 2023-07-22 07:10:30,216 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e16235d88cb385592fd0b9887a65c2f, ASSIGN 2023-07-22 07:10:30,216 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8e1347c16201fa7264dd07badd12c71, ASSIGN 2023-07-22 07:10:30,217 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4416e68564631a31aeee1cd90765b524, ASSIGN 2023-07-22 07:10:30,218 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb5cc297e84ee621ef116b994f44e02c, ASSIGN 2023-07-22 07:10:30,220 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=12, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e16235d88cb385592fd0b9887a65c2f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39057,1690009825637; forceNewPlan=false, retain=false 2023-07-22 07:10:30,220 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8e1347c16201fa7264dd07badd12c71, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41787,1690009825478; forceNewPlan=false, retain=false 2023-07-22 07:10:30,220 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97ad6d84bd0754d12df91fb12808fc69, ASSIGN 2023-07-22 07:10:30,220 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4416e68564631a31aeee1cd90765b524, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41787,1690009825478; forceNewPlan=false, retain=false 2023-07-22 07:10:30,220 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb5cc297e84ee621ef116b994f44e02c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39057,1690009825637; forceNewPlan=false, retain=false 2023-07-22 07:10:30,222 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97ad6d84bd0754d12df91fb12808fc69, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39057,1690009825637; forceNewPlan=false, retain=false 2023-07-22 07:10:30,370 INFO [jenkins-hbase4:37061] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-22 07:10:30,374 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=4416e68564631a31aeee1cd90765b524, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:30,374 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=cb5cc297e84ee621ef116b994f44e02c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:30,374 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=c8e1347c16201fa7264dd07badd12c71, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:30,375 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=97ad6d84bd0754d12df91fb12808fc69, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:30,375 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009830374"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009830374"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009830374"}]},"ts":"1690009830374"} 2023-07-22 07:10:30,375 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009830375"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009830375"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009830375"}]},"ts":"1690009830375"} 2023-07-22 07:10:30,374 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=6e16235d88cb385592fd0b9887a65c2f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:30,376 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009830374"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009830374"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009830374"}]},"ts":"1690009830374"} 2023-07-22 07:10:30,374 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009830374"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009830374"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009830374"}]},"ts":"1690009830374"} 2023-07-22 07:10:30,377 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009830374"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009830374"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009830374"}]},"ts":"1690009830374"} 2023-07-22 07:10:30,380 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure 
c8e1347c16201fa7264dd07badd12c71, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:30,383 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=13, state=RUNNABLE; OpenRegionProcedure 97ad6d84bd0754d12df91fb12808fc69, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:30,386 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=17, state=RUNNABLE; OpenRegionProcedure 6e16235d88cb385592fd0b9887a65c2f, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:30,389 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=14, state=RUNNABLE; OpenRegionProcedure cb5cc297e84ee621ef116b994f44e02c, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:30,391 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=15, state=RUNNABLE; OpenRegionProcedure 4416e68564631a31aeee1cd90765b524, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:30,541 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 2023-07-22 07:10:30,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4416e68564631a31aeee1cd90765b524, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-22 07:10:30,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:30,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:30,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:30,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:30,542 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 
2023-07-22 07:10:30,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cb5cc297e84ee621ef116b994f44e02c, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-22 07:10:30,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:30,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:30,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:30,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:30,544 INFO [StoreOpener-4416e68564631a31aeee1cd90765b524-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:30,545 INFO [StoreOpener-cb5cc297e84ee621ef116b994f44e02c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:30,550 DEBUG [StoreOpener-4416e68564631a31aeee1cd90765b524-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524/f 2023-07-22 07:10:30,550 DEBUG [StoreOpener-4416e68564631a31aeee1cd90765b524-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524/f 2023-07-22 07:10:30,550 INFO [StoreOpener-4416e68564631a31aeee1cd90765b524-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4416e68564631a31aeee1cd90765b524 columnFamilyName f 2023-07-22 07:10:30,551 INFO [StoreOpener-4416e68564631a31aeee1cd90765b524-1] regionserver.HStore(310): Store=4416e68564631a31aeee1cd90765b524/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:30,551 DEBUG [StoreOpener-cb5cc297e84ee621ef116b994f44e02c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c/f 2023-07-22 07:10:30,553 DEBUG [StoreOpener-cb5cc297e84ee621ef116b994f44e02c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c/f 2023-07-22 07:10:30,553 INFO [StoreOpener-cb5cc297e84ee621ef116b994f44e02c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cb5cc297e84ee621ef116b994f44e02c columnFamilyName f 2023-07-22 07:10:30,554 INFO [StoreOpener-cb5cc297e84ee621ef116b994f44e02c-1] regionserver.HStore(310): Store=cb5cc297e84ee621ef116b994f44e02c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:30,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:30,560 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:30,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:30,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:30,565 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:30,571 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:30,572 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:30,572 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4416e68564631a31aeee1cd90765b524; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9857594880, jitterRate=-0.0819399356842041}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:30,572 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4416e68564631a31aeee1cd90765b524: 2023-07-22 07:10:30,573 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524., pid=22, masterSystemTime=1690009830535 2023-07-22 07:10:30,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:30,576 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cb5cc297e84ee621ef116b994f44e02c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10796143360, jitterRate=0.005469202995300293}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:30,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cb5cc297e84ee621ef116b994f44e02c: 2023-07-22 07:10:30,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 2023-07-22 07:10:30,576 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 2023-07-22 07:10:30,577 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 
2023-07-22 07:10:30,577 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c8e1347c16201fa7264dd07badd12c71, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-22 07:10:30,578 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=4416e68564631a31aeee1cd90765b524, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:30,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:30,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:30,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:30,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:30,578 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009830578"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009830578"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009830578"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009830578"}]},"ts":"1690009830578"} 2023-07-22 07:10:30,579 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c., pid=21, masterSystemTime=1690009830538 2023-07-22 07:10:30,581 INFO [StoreOpener-c8e1347c16201fa7264dd07badd12c71-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:30,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 2023-07-22 07:10:30,585 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 2023-07-22 07:10:30,585 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 
2023-07-22 07:10:30,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6e16235d88cb385592fd0b9887a65c2f, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-22 07:10:30,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:30,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:30,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:30,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:30,587 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=cb5cc297e84ee621ef116b994f44e02c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:30,588 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009830587"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009830587"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009830587"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009830587"}]},"ts":"1690009830587"} 2023-07-22 07:10:30,588 DEBUG [StoreOpener-c8e1347c16201fa7264dd07badd12c71-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71/f 2023-07-22 07:10:30,588 DEBUG [StoreOpener-c8e1347c16201fa7264dd07badd12c71-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71/f 2023-07-22 07:10:30,589 INFO [StoreOpener-6e16235d88cb385592fd0b9887a65c2f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:30,590 INFO [StoreOpener-c8e1347c16201fa7264dd07badd12c71-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor 
true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c8e1347c16201fa7264dd07badd12c71 columnFamilyName f 2023-07-22 07:10:30,592 INFO [StoreOpener-c8e1347c16201fa7264dd07badd12c71-1] regionserver.HStore(310): Store=c8e1347c16201fa7264dd07badd12c71/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:30,592 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=15 2023-07-22 07:10:30,593 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=15, state=SUCCESS; OpenRegionProcedure 4416e68564631a31aeee1cd90765b524, server=jenkins-hbase4.apache.org,41787,1690009825478 in 196 msec 2023-07-22 07:10:30,594 DEBUG [StoreOpener-6e16235d88cb385592fd0b9887a65c2f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f/f 2023-07-22 07:10:30,594 DEBUG [StoreOpener-6e16235d88cb385592fd0b9887a65c2f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f/f 2023-07-22 07:10:30,595 INFO [StoreOpener-6e16235d88cb385592fd0b9887a65c2f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6e16235d88cb385592fd0b9887a65c2f columnFamilyName f 2023-07-22 07:10:30,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:30,596 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:30,596 INFO [StoreOpener-6e16235d88cb385592fd0b9887a65c2f-1] regionserver.HStore(310): Store=6e16235d88cb385592fd0b9887a65c2f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:30,597 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4416e68564631a31aeee1cd90765b524, ASSIGN in 382 msec 2023-07-22 07:10:30,598 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=14 2023-07-22 07:10:30,598 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=14, state=SUCCESS; OpenRegionProcedure cb5cc297e84ee621ef116b994f44e02c, server=jenkins-hbase4.apache.org,39057,1690009825637 in 203 msec 2023-07-22 07:10:30,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:30,602 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:30,603 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb5cc297e84ee621ef116b994f44e02c, ASSIGN in 387 msec 2023-07-22 07:10:30,604 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:30,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:30,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:30,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c8e1347c16201fa7264dd07badd12c71; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10927825280, jitterRate=0.01773303747177124}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:30,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c8e1347c16201fa7264dd07badd12c71: 2023-07-22 07:10:30,617 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71., pid=18, masterSystemTime=1690009830535 2023-07-22 07:10:30,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:30,620 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6e16235d88cb385592fd0b9887a65c2f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11882052480, jitterRate=0.10660237073898315}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:30,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6e16235d88cb385592fd0b9887a65c2f: 2023-07-22 07:10:30,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): 
Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 2023-07-22 07:10:30,627 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 2023-07-22 07:10:30,628 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f., pid=20, masterSystemTime=1690009830538 2023-07-22 07:10:30,628 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=c8e1347c16201fa7264dd07badd12c71, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:30,628 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009830628"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009830628"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009830628"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009830628"}]},"ts":"1690009830628"} 2023-07-22 07:10:30,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 2023-07-22 07:10:30,630 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 2023-07-22 07:10:30,630 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 
2023-07-22 07:10:30,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 97ad6d84bd0754d12df91fb12808fc69, NAME => 'Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-22 07:10:30,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:30,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:30,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:30,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:30,632 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=6e16235d88cb385592fd0b9887a65c2f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:30,633 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009830632"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009830632"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009830632"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009830632"}]},"ts":"1690009830632"} 2023-07-22 07:10:30,636 INFO [StoreOpener-97ad6d84bd0754d12df91fb12808fc69-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:30,639 DEBUG [StoreOpener-97ad6d84bd0754d12df91fb12808fc69-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69/f 2023-07-22 07:10:30,639 DEBUG [StoreOpener-97ad6d84bd0754d12df91fb12808fc69-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69/f 2023-07-22 07:10:30,639 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-07-22 07:10:30,640 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure c8e1347c16201fa7264dd07badd12c71, server=jenkins-hbase4.apache.org,41787,1690009825478 in 251 msec 2023-07-22 07:10:30,640 INFO [StoreOpener-97ad6d84bd0754d12df91fb12808fc69-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 97ad6d84bd0754d12df91fb12808fc69 columnFamilyName f 2023-07-22 07:10:30,641 INFO [StoreOpener-97ad6d84bd0754d12df91fb12808fc69-1] regionserver.HStore(310): Store=97ad6d84bd0754d12df91fb12808fc69/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:30,642 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8e1347c16201fa7264dd07badd12c71, ASSIGN in 429 msec 2023-07-22 07:10:30,643 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:30,644 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:30,659 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=17 2023-07-22 07:10:30,659 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=17, state=SUCCESS; OpenRegionProcedure 6e16235d88cb385592fd0b9887a65c2f, server=jenkins-hbase4.apache.org,39057,1690009825637 in 259 msec 2023-07-22 07:10:30,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:30,662 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e16235d88cb385592fd0b9887a65c2f, ASSIGN in 448 msec 2023-07-22 07:10:30,666 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:30,667 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 97ad6d84bd0754d12df91fb12808fc69; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10062980480, jitterRate=-0.06281191110610962}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:30,667 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 97ad6d84bd0754d12df91fb12808fc69: 2023-07-22 07:10:30,668 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69., pid=19, masterSystemTime=1690009830538 2023-07-22 07:10:30,671 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 2023-07-22 07:10:30,671 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 2023-07-22 07:10:30,673 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=97ad6d84bd0754d12df91fb12808fc69, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:30,673 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009830672"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009830672"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009830672"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009830672"}]},"ts":"1690009830672"} 2023-07-22 07:10:30,681 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=13 2023-07-22 07:10:30,681 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=13, state=SUCCESS; OpenRegionProcedure 97ad6d84bd0754d12df91fb12808fc69, server=jenkins-hbase4.apache.org,39057,1690009825637 in 292 msec 2023-07-22 07:10:30,684 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-22 07:10:30,685 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97ad6d84bd0754d12df91fb12808fc69, ASSIGN in 470 msec 2023-07-22 07:10:30,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-22 07:10:30,687 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:10:30,687 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009830687"}]},"ts":"1690009830687"} 2023-07-22 07:10:30,691 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-22 07:10:30,694 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:10:30,711 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 1.1580 sec 2023-07-22 07:10:31,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-22 07:10:31,689 INFO [Listener at 
localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 12 completed 2023-07-22 07:10:31,689 DEBUG [Listener at localhost/46507] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-22 07:10:31,690 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:31,697 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-22 07:10:31,698 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:31,698 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-22 07:10:31,699 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:31,705 DEBUG [Listener at localhost/46507] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 07:10:31,713 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48062, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 07:10:31,715 DEBUG [Listener at localhost/46507] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 07:10:31,719 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44462, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 07:10:31,720 DEBUG [Listener at localhost/46507] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 07:10:31,721 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60298, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 07:10:31,723 DEBUG [Listener at localhost/46507] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 07:10:31,726 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59320, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 07:10:31,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:31,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:10:31,747 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:31,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:31,763 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:31,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:31,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:31,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:31,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:31,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(345): Moving region 97ad6d84bd0754d12df91fb12808fc69 to RSGroup Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:31,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:31,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:31,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:31,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:31,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:31,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97ad6d84bd0754d12df91fb12808fc69, REOPEN/MOVE 2023-07-22 07:10:31,775 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97ad6d84bd0754d12df91fb12808fc69, REOPEN/MOVE 2023-07-22 07:10:31,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(345): Moving region cb5cc297e84ee621ef116b994f44e02c to RSGroup Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:31,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:31,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:31,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:31,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:31,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(378): Number of 
tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:31,777 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=97ad6d84bd0754d12df91fb12808fc69, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:31,777 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009831777"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009831777"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009831777"}]},"ts":"1690009831777"} 2023-07-22 07:10:31,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb5cc297e84ee621ef116b994f44e02c, REOPEN/MOVE 2023-07-22 07:10:31,779 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb5cc297e84ee621ef116b994f44e02c, REOPEN/MOVE 2023-07-22 07:10:31,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(345): Moving region 4416e68564631a31aeee1cd90765b524 to RSGroup Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:31,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:31,780 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=cb5cc297e84ee621ef116b994f44e02c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:31,780 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009831780"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009831780"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009831780"}]},"ts":"1690009831780"} 2023-07-22 07:10:31,781 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=23, state=RUNNABLE; CloseRegionProcedure 97ad6d84bd0754d12df91fb12808fc69, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:31,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:31,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:31,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:31,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:31,783 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=24, state=RUNNABLE; CloseRegionProcedure cb5cc297e84ee621ef116b994f44e02c, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:31,787 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4416e68564631a31aeee1cd90765b524, REOPEN/MOVE 2023-07-22 07:10:31,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(345): Moving region c8e1347c16201fa7264dd07badd12c71 to RSGroup Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:31,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:31,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:31,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:31,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:31,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:31,789 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4416e68564631a31aeee1cd90765b524, REOPEN/MOVE 2023-07-22 07:10:31,792 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=4416e68564631a31aeee1cd90765b524, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:31,792 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009831792"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009831792"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009831792"}]},"ts":"1690009831792"} 2023-07-22 07:10:31,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8e1347c16201fa7264dd07badd12c71, REOPEN/MOVE 2023-07-22 07:10:31,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(345): Moving region 6e16235d88cb385592fd0b9887a65c2f to RSGroup Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:31,794 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8e1347c16201fa7264dd07badd12c71, REOPEN/MOVE 2023-07-22 07:10:31,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:31,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:31,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] 
balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:31,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:31,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:31,796 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=c8e1347c16201fa7264dd07badd12c71, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:31,796 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009831796"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009831796"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009831796"}]},"ts":"1690009831796"} 2023-07-22 07:10:31,797 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=26, state=RUNNABLE; CloseRegionProcedure 4416e68564631a31aeee1cd90765b524, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:31,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e16235d88cb385592fd0b9887a65c2f, REOPEN/MOVE 2023-07-22 07:10:31,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_130911739, current retry=0 2023-07-22 07:10:31,799 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e16235d88cb385592fd0b9887a65c2f, REOPEN/MOVE 2023-07-22 07:10:31,800 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=28, state=RUNNABLE; CloseRegionProcedure c8e1347c16201fa7264dd07badd12c71, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:31,801 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=6e16235d88cb385592fd0b9887a65c2f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:31,801 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009831801"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009831801"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009831801"}]},"ts":"1690009831801"} 2023-07-22 07:10:31,805 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=29, state=RUNNABLE; CloseRegionProcedure 6e16235d88cb385592fd0b9887a65c2f, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:31,953 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:31,954 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1604): Closing 6e16235d88cb385592fd0b9887a65c2f, disabling compactions & flushes 2023-07-22 07:10:31,955 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 2023-07-22 07:10:31,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 2023-07-22 07:10:31,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. after waiting 0 ms 2023-07-22 07:10:31,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 2023-07-22 07:10:31,958 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:31,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c8e1347c16201fa7264dd07badd12c71, disabling compactions & flushes 2023-07-22 07:10:31,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 2023-07-22 07:10:31,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 2023-07-22 07:10:31,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. after waiting 0 ms 2023-07-22 07:10:31,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 2023-07-22 07:10:31,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:31,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:31,969 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 2023-07-22 07:10:31,969 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 
2023-07-22 07:10:31,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6e16235d88cb385592fd0b9887a65c2f: 2023-07-22 07:10:31,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c8e1347c16201fa7264dd07badd12c71: 2023-07-22 07:10:31,969 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6e16235d88cb385592fd0b9887a65c2f move to jenkins-hbase4.apache.org,34133,1690009825283 record at close sequenceid=2 2023-07-22 07:10:31,969 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c8e1347c16201fa7264dd07badd12c71 move to jenkins-hbase4.apache.org,34133,1690009825283 record at close sequenceid=2 2023-07-22 07:10:31,972 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:31,972 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:31,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4416e68564631a31aeee1cd90765b524, disabling compactions & flushes 2023-07-22 07:10:31,974 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 2023-07-22 07:10:31,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 2023-07-22 07:10:31,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. after waiting 0 ms 2023-07-22 07:10:31,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 2023-07-22 07:10:31,976 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=c8e1347c16201fa7264dd07badd12c71, regionState=CLOSED 2023-07-22 07:10:31,977 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:31,977 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009831976"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009831976"}]},"ts":"1690009831976"} 2023-07-22 07:10:31,977 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:31,978 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 97ad6d84bd0754d12df91fb12808fc69, disabling compactions & flushes 2023-07-22 07:10:31,978 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 
2023-07-22 07:10:31,978 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 2023-07-22 07:10:31,978 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. after waiting 0 ms 2023-07-22 07:10:31,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 2023-07-22 07:10:31,984 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=6e16235d88cb385592fd0b9887a65c2f, regionState=CLOSED 2023-07-22 07:10:31,985 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009831984"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009831984"}]},"ts":"1690009831984"} 2023-07-22 07:10:31,993 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:31,995 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=28 2023-07-22 07:10:31,995 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=28, state=SUCCESS; CloseRegionProcedure c8e1347c16201fa7264dd07badd12c71, server=jenkins-hbase4.apache.org,41787,1690009825478 in 185 msec 2023-07-22 07:10:31,995 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=29 2023-07-22 07:10:31,995 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=29, state=SUCCESS; CloseRegionProcedure 6e16235d88cb385592fd0b9887a65c2f, server=jenkins-hbase4.apache.org,39057,1690009825637 in 183 msec 2023-07-22 07:10:31,995 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 
2023-07-22 07:10:31,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4416e68564631a31aeee1cd90765b524: 2023-07-22 07:10:31,995 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 4416e68564631a31aeee1cd90765b524 move to jenkins-hbase4.apache.org,33357,1690009829125 record at close sequenceid=2 2023-07-22 07:10:31,997 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8e1347c16201fa7264dd07badd12c71, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34133,1690009825283; forceNewPlan=false, retain=false 2023-07-22 07:10:31,998 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e16235d88cb385592fd0b9887a65c2f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34133,1690009825283; forceNewPlan=false, retain=false 2023-07-22 07:10:32,002 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:32,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:32,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 2023-07-22 07:10:32,004 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 97ad6d84bd0754d12df91fb12808fc69: 2023-07-22 07:10:32,005 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 97ad6d84bd0754d12df91fb12808fc69 move to jenkins-hbase4.apache.org,33357,1690009829125 record at close sequenceid=2 2023-07-22 07:10:32,005 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=4416e68564631a31aeee1cd90765b524, regionState=CLOSED 2023-07-22 07:10:32,005 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009832005"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009832005"}]},"ts":"1690009832005"} 2023-07-22 07:10:32,007 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:32,007 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:32,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cb5cc297e84ee621ef116b994f44e02c, disabling compactions & flushes 2023-07-22 07:10:32,008 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 
2023-07-22 07:10:32,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 2023-07-22 07:10:32,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. after waiting 0 ms 2023-07-22 07:10:32,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 2023-07-22 07:10:32,009 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=97ad6d84bd0754d12df91fb12808fc69, regionState=CLOSED 2023-07-22 07:10:32,009 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009832009"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009832009"}]},"ts":"1690009832009"} 2023-07-22 07:10:32,014 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=26 2023-07-22 07:10:32,014 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=26, state=SUCCESS; CloseRegionProcedure 4416e68564631a31aeee1cd90765b524, server=jenkins-hbase4.apache.org,41787,1690009825478 in 211 msec 2023-07-22 07:10:32,015 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4416e68564631a31aeee1cd90765b524, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33357,1690009829125; forceNewPlan=false, retain=false 2023-07-22 07:10:32,017 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=23 2023-07-22 07:10:32,017 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=23, state=SUCCESS; CloseRegionProcedure 97ad6d84bd0754d12df91fb12808fc69, server=jenkins-hbase4.apache.org,39057,1690009825637 in 231 msec 2023-07-22 07:10:32,018 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97ad6d84bd0754d12df91fb12808fc69, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33357,1690009829125; forceNewPlan=false, retain=false 2023-07-22 07:10:32,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:32,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 
2023-07-22 07:10:32,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cb5cc297e84ee621ef116b994f44e02c: 2023-07-22 07:10:32,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding cb5cc297e84ee621ef116b994f44e02c move to jenkins-hbase4.apache.org,34133,1690009825283 record at close sequenceid=2 2023-07-22 07:10:32,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:32,022 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=cb5cc297e84ee621ef116b994f44e02c, regionState=CLOSED 2023-07-22 07:10:32,022 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009832022"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009832022"}]},"ts":"1690009832022"} 2023-07-22 07:10:32,027 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=24 2023-07-22 07:10:32,027 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=24, state=SUCCESS; CloseRegionProcedure cb5cc297e84ee621ef116b994f44e02c, server=jenkins-hbase4.apache.org,39057,1690009825637 in 242 msec 2023-07-22 07:10:32,028 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb5cc297e84ee621ef116b994f44e02c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34133,1690009825283; forceNewPlan=false, retain=false 2023-07-22 07:10:32,147 INFO [jenkins-hbase4:37061] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-22 07:10:32,148 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=6e16235d88cb385592fd0b9887a65c2f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:32,148 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=97ad6d84bd0754d12df91fb12808fc69, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:32,148 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=4416e68564631a31aeee1cd90765b524, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:32,148 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=cb5cc297e84ee621ef116b994f44e02c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:32,148 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009832148"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009832148"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009832148"}]},"ts":"1690009832148"} 2023-07-22 07:10:32,148 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009832148"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009832148"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009832148"}]},"ts":"1690009832148"} 2023-07-22 07:10:32,148 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009832148"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009832148"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009832148"}]},"ts":"1690009832148"} 2023-07-22 07:10:32,148 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009832148"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009832148"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009832148"}]},"ts":"1690009832148"} 2023-07-22 07:10:32,148 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=c8e1347c16201fa7264dd07badd12c71, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:32,149 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009832148"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009832148"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009832148"}]},"ts":"1690009832148"} 2023-07-22 07:10:32,151 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=26, state=RUNNABLE; OpenRegionProcedure 
4416e68564631a31aeee1cd90765b524, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:32,153 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=24, state=RUNNABLE; OpenRegionProcedure cb5cc297e84ee621ef116b994f44e02c, server=jenkins-hbase4.apache.org,34133,1690009825283}] 2023-07-22 07:10:32,155 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=23, state=RUNNABLE; OpenRegionProcedure 97ad6d84bd0754d12df91fb12808fc69, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:32,157 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=29, state=RUNNABLE; OpenRegionProcedure 6e16235d88cb385592fd0b9887a65c2f, server=jenkins-hbase4.apache.org,34133,1690009825283}] 2023-07-22 07:10:32,161 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=28, state=RUNNABLE; OpenRegionProcedure c8e1347c16201fa7264dd07badd12c71, server=jenkins-hbase4.apache.org,34133,1690009825283}] 2023-07-22 07:10:32,305 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:32,305 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 07:10:32,306 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:32,306 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 07:10:32,307 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48070, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 07:10:32,310 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44472, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 07:10:32,315 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 2023-07-22 07:10:32,316 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 
2023-07-22 07:10:32,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6e16235d88cb385592fd0b9887a65c2f, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-22 07:10:32,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 97ad6d84bd0754d12df91fb12808fc69, NAME => 'Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-22 07:10:32,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:32,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:32,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:32,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:32,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:32,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:32,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:32,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:32,318 INFO [StoreOpener-6e16235d88cb385592fd0b9887a65c2f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:32,318 INFO [StoreOpener-97ad6d84bd0754d12df91fb12808fc69-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:32,319 DEBUG [StoreOpener-6e16235d88cb385592fd0b9887a65c2f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f/f 2023-07-22 07:10:32,319 DEBUG [StoreOpener-6e16235d88cb385592fd0b9887a65c2f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f/f 2023-07-22 07:10:32,320 INFO [StoreOpener-6e16235d88cb385592fd0b9887a65c2f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6e16235d88cb385592fd0b9887a65c2f columnFamilyName f 2023-07-22 07:10:32,331 INFO [StoreOpener-6e16235d88cb385592fd0b9887a65c2f-1] regionserver.HStore(310): Store=6e16235d88cb385592fd0b9887a65c2f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:32,331 DEBUG [StoreOpener-97ad6d84bd0754d12df91fb12808fc69-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69/f 2023-07-22 07:10:32,332 DEBUG [StoreOpener-97ad6d84bd0754d12df91fb12808fc69-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69/f 2023-07-22 07:10:32,333 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:32,334 INFO [StoreOpener-97ad6d84bd0754d12df91fb12808fc69-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 97ad6d84bd0754d12df91fb12808fc69 columnFamilyName f 2023-07-22 07:10:32,335 INFO [StoreOpener-97ad6d84bd0754d12df91fb12808fc69-1] regionserver.HStore(310): Store=97ad6d84bd0754d12df91fb12808fc69/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:32,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f 2023-07-22 
07:10:32,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:32,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:32,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:32,343 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6e16235d88cb385592fd0b9887a65c2f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10903027200, jitterRate=0.01542353630065918}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:32,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6e16235d88cb385592fd0b9887a65c2f: 2023-07-22 07:10:32,344 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f., pid=36, masterSystemTime=1690009832306 2023-07-22 07:10:32,350 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:32,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 2023-07-22 07:10:32,351 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 2023-07-22 07:10:32,352 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 
2023-07-22 07:10:32,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cb5cc297e84ee621ef116b994f44e02c, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-22 07:10:32,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:32,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:32,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:32,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:32,353 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=6e16235d88cb385592fd0b9887a65c2f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:32,353 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009832353"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009832353"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009832353"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009832353"}]},"ts":"1690009832353"} 2023-07-22 07:10:32,353 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 97ad6d84bd0754d12df91fb12808fc69; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10693712480, jitterRate=-0.00407041609287262}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:32,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 97ad6d84bd0754d12df91fb12808fc69: 2023-07-22 07:10:32,355 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69., pid=35, masterSystemTime=1690009832305 2023-07-22 07:10:32,360 INFO [StoreOpener-cb5cc297e84ee621ef116b994f44e02c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:32,363 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=29 2023-07-22 07:10:32,363 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=29, state=SUCCESS; OpenRegionProcedure 6e16235d88cb385592fd0b9887a65c2f, server=jenkins-hbase4.apache.org,34133,1690009825283 in 201 msec 
2023-07-22 07:10:32,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 2023-07-22 07:10:32,364 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 2023-07-22 07:10:32,364 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 2023-07-22 07:10:32,364 DEBUG [StoreOpener-cb5cc297e84ee621ef116b994f44e02c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c/f 2023-07-22 07:10:32,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4416e68564631a31aeee1cd90765b524, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-22 07:10:32,365 DEBUG [StoreOpener-cb5cc297e84ee621ef116b994f44e02c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c/f 2023-07-22 07:10:32,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:32,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:32,365 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=97ad6d84bd0754d12df91fb12808fc69, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:32,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:32,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:32,365 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009832365"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009832365"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009832365"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009832365"}]},"ts":"1690009832365"} 2023-07-22 07:10:32,366 INFO [StoreOpener-cb5cc297e84ee621ef116b994f44e02c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 
5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cb5cc297e84ee621ef116b994f44e02c columnFamilyName f 2023-07-22 07:10:32,367 INFO [StoreOpener-cb5cc297e84ee621ef116b994f44e02c-1] regionserver.HStore(310): Store=cb5cc297e84ee621ef116b994f44e02c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:32,368 INFO [StoreOpener-4416e68564631a31aeee1cd90765b524-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:32,368 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e16235d88cb385592fd0b9887a65c2f, REOPEN/MOVE in 569 msec 2023-07-22 07:10:32,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:32,370 DEBUG [StoreOpener-4416e68564631a31aeee1cd90765b524-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524/f 2023-07-22 07:10:32,370 DEBUG [StoreOpener-4416e68564631a31aeee1cd90765b524-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524/f 2023-07-22 07:10:32,371 INFO [StoreOpener-4416e68564631a31aeee1cd90765b524-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4416e68564631a31aeee1cd90765b524 columnFamilyName f 2023-07-22 07:10:32,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:32,372 INFO [StoreOpener-4416e68564631a31aeee1cd90765b524-1] regionserver.HStore(310): Store=4416e68564631a31aeee1cd90765b524/f, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:32,375 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=23 2023-07-22 07:10:32,375 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=23, state=SUCCESS; OpenRegionProcedure 97ad6d84bd0754d12df91fb12808fc69, server=jenkins-hbase4.apache.org,33357,1690009829125 in 214 msec 2023-07-22 07:10:32,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:32,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:32,377 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97ad6d84bd0754d12df91fb12808fc69, REOPEN/MOVE in 602 msec 2023-07-22 07:10:32,378 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:32,379 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cb5cc297e84ee621ef116b994f44e02c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9868970400, jitterRate=-0.08088050782680511}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:32,379 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cb5cc297e84ee621ef116b994f44e02c: 2023-07-22 07:10:32,380 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c., pid=34, masterSystemTime=1690009832306 2023-07-22 07:10:32,382 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 2023-07-22 07:10:32,382 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 2023-07-22 07:10:32,382 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 
2023-07-22 07:10:32,382 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c8e1347c16201fa7264dd07badd12c71, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-22 07:10:32,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:32,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:32,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:32,383 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=cb5cc297e84ee621ef116b994f44e02c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:32,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:32,383 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009832383"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009832383"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009832383"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009832383"}]},"ts":"1690009832383"} 2023-07-22 07:10:32,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:32,385 INFO [StoreOpener-c8e1347c16201fa7264dd07badd12c71-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:32,386 DEBUG [StoreOpener-c8e1347c16201fa7264dd07badd12c71-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71/f 2023-07-22 07:10:32,386 DEBUG [StoreOpener-c8e1347c16201fa7264dd07badd12c71-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71/f 2023-07-22 07:10:32,387 INFO [StoreOpener-c8e1347c16201fa7264dd07badd12c71-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered 
compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c8e1347c16201fa7264dd07badd12c71 columnFamilyName f 2023-07-22 07:10:32,388 INFO [StoreOpener-c8e1347c16201fa7264dd07badd12c71-1] regionserver.HStore(310): Store=c8e1347c16201fa7264dd07badd12c71/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:32,389 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:32,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:32,391 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4416e68564631a31aeee1cd90765b524; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11027179840, jitterRate=0.026986151933670044}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:32,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4416e68564631a31aeee1cd90765b524: 2023-07-22 07:10:32,396 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524., pid=33, masterSystemTime=1690009832305 2023-07-22 07:10:32,396 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=24 2023-07-22 07:10:32,396 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=24, state=SUCCESS; OpenRegionProcedure cb5cc297e84ee621ef116b994f44e02c, server=jenkins-hbase4.apache.org,34133,1690009825283 in 234 msec 2023-07-22 07:10:32,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:32,398 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c8e1347c16201fa7264dd07badd12c71; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11763955840, jitterRate=0.09560376405715942}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:32,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c8e1347c16201fa7264dd07badd12c71: 2023-07-22 07:10:32,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 
2023-07-22 07:10:32,399 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 2023-07-22 07:10:32,400 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71., pid=37, masterSystemTime=1690009832306 2023-07-22 07:10:32,401 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb5cc297e84ee621ef116b994f44e02c, REOPEN/MOVE in 620 msec 2023-07-22 07:10:32,401 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=4416e68564631a31aeee1cd90765b524, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:32,401 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009832401"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009832401"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009832401"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009832401"}]},"ts":"1690009832401"} 2023-07-22 07:10:32,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 2023-07-22 07:10:32,404 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=c8e1347c16201fa7264dd07badd12c71, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:32,405 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 
2023-07-22 07:10:32,405 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009832404"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009832404"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009832404"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009832404"}]},"ts":"1690009832404"} 2023-07-22 07:10:32,411 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=26 2023-07-22 07:10:32,411 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=26, state=SUCCESS; OpenRegionProcedure 4416e68564631a31aeee1cd90765b524, server=jenkins-hbase4.apache.org,33357,1690009829125 in 254 msec 2023-07-22 07:10:32,416 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4416e68564631a31aeee1cd90765b524, REOPEN/MOVE in 630 msec 2023-07-22 07:10:32,417 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=28 2023-07-22 07:10:32,417 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=28, state=SUCCESS; OpenRegionProcedure c8e1347c16201fa7264dd07badd12c71, server=jenkins-hbase4.apache.org,34133,1690009825283 in 249 msec 2023-07-22 07:10:32,423 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8e1347c16201fa7264dd07badd12c71, REOPEN/MOVE in 629 msec 2023-07-22 07:10:32,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure.ProcedureSyncWait(216): waitFor pid=23 2023-07-22 07:10:32,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_130911739. 
2023-07-22 07:10:32,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:32,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:32,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:32,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:32,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:10:32,810 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:32,816 INFO [Listener at localhost/46507] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:32,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:32,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=38, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:32,838 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009832837"}]},"ts":"1690009832837"} 2023-07-22 07:10:32,840 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-22 07:10:32,842 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-22 07:10:32,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-22 07:10:32,846 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97ad6d84bd0754d12df91fb12808fc69, UNASSIGN}, {pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb5cc297e84ee621ef116b994f44e02c, UNASSIGN}, {pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4416e68564631a31aeee1cd90765b524, UNASSIGN}, {pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8e1347c16201fa7264dd07badd12c71, UNASSIGN}, {pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=6e16235d88cb385592fd0b9887a65c2f, UNASSIGN}] 2023-07-22 07:10:32,847 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e16235d88cb385592fd0b9887a65c2f, UNASSIGN 2023-07-22 07:10:32,850 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8e1347c16201fa7264dd07badd12c71, UNASSIGN 2023-07-22 07:10:32,850 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4416e68564631a31aeee1cd90765b524, UNASSIGN 2023-07-22 07:10:32,851 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb5cc297e84ee621ef116b994f44e02c, UNASSIGN 2023-07-22 07:10:32,851 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97ad6d84bd0754d12df91fb12808fc69, UNASSIGN 2023-07-22 07:10:32,852 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=6e16235d88cb385592fd0b9887a65c2f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:32,852 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009832851"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009832851"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009832851"}]},"ts":"1690009832851"} 2023-07-22 07:10:32,854 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=c8e1347c16201fa7264dd07badd12c71, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:32,854 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=cb5cc297e84ee621ef116b994f44e02c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:32,854 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=97ad6d84bd0754d12df91fb12808fc69, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:32,854 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009832854"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009832854"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009832854"}]},"ts":"1690009832854"} 2023-07-22 07:10:32,854 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009832854"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009832854"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009832854"}]},"ts":"1690009832854"} 2023-07-22 07:10:32,854 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009832854"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009832854"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009832854"}]},"ts":"1690009832854"} 2023-07-22 07:10:32,854 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=4416e68564631a31aeee1cd90765b524, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:32,856 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009832854"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009832854"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009832854"}]},"ts":"1690009832854"} 2023-07-22 07:10:32,857 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=43, state=RUNNABLE; CloseRegionProcedure 6e16235d88cb385592fd0b9887a65c2f, server=jenkins-hbase4.apache.org,34133,1690009825283}] 2023-07-22 07:10:32,860 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=40, state=RUNNABLE; CloseRegionProcedure cb5cc297e84ee621ef116b994f44e02c, server=jenkins-hbase4.apache.org,34133,1690009825283}] 2023-07-22 07:10:32,861 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=39, state=RUNNABLE; CloseRegionProcedure 97ad6d84bd0754d12df91fb12808fc69, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:32,864 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=42, state=RUNNABLE; CloseRegionProcedure c8e1347c16201fa7264dd07badd12c71, server=jenkins-hbase4.apache.org,34133,1690009825283}] 2023-07-22 07:10:32,865 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=41, state=RUNNABLE; CloseRegionProcedure 4416e68564631a31aeee1cd90765b524, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:32,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-22 07:10:33,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:33,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cb5cc297e84ee621ef116b994f44e02c, disabling compactions & flushes 2023-07-22 07:10:33,015 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 
2023-07-22 07:10:33,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 2023-07-22 07:10:33,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. after waiting 0 ms 2023-07-22 07:10:33,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 2023-07-22 07:10:33,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:33,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 97ad6d84bd0754d12df91fb12808fc69, disabling compactions & flushes 2023-07-22 07:10:33,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 2023-07-22 07:10:33,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 2023-07-22 07:10:33,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. after waiting 0 ms 2023-07-22 07:10:33,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 2023-07-22 07:10:33,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 07:10:33,029 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c. 2023-07-22 07:10:33,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cb5cc297e84ee621ef116b994f44e02c: 2023-07-22 07:10:33,032 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:33,032 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:33,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6e16235d88cb385592fd0b9887a65c2f, disabling compactions & flushes 2023-07-22 07:10:33,034 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 
2023-07-22 07:10:33,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 2023-07-22 07:10:33,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. after waiting 0 ms 2023-07-22 07:10:33,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 2023-07-22 07:10:33,040 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=cb5cc297e84ee621ef116b994f44e02c, regionState=CLOSED 2023-07-22 07:10:33,040 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009833040"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009833040"}]},"ts":"1690009833040"} 2023-07-22 07:10:33,041 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 07:10:33,042 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69. 2023-07-22 07:10:33,042 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 97ad6d84bd0754d12df91fb12808fc69: 2023-07-22 07:10:33,045 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:33,045 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:33,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4416e68564631a31aeee1cd90765b524, disabling compactions & flushes 2023-07-22 07:10:33,046 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 2023-07-22 07:10:33,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 2023-07-22 07:10:33,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. after waiting 0 ms 2023-07-22 07:10:33,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 
2023-07-22 07:10:33,047 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=97ad6d84bd0754d12df91fb12808fc69, regionState=CLOSED 2023-07-22 07:10:33,047 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009833047"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009833047"}]},"ts":"1690009833047"} 2023-07-22 07:10:33,049 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=40 2023-07-22 07:10:33,049 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=40, state=SUCCESS; CloseRegionProcedure cb5cc297e84ee621ef116b994f44e02c, server=jenkins-hbase4.apache.org,34133,1690009825283 in 183 msec 2023-07-22 07:10:33,057 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb5cc297e84ee621ef116b994f44e02c, UNASSIGN in 203 msec 2023-07-22 07:10:33,058 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=39 2023-07-22 07:10:33,058 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=39, state=SUCCESS; CloseRegionProcedure 97ad6d84bd0754d12df91fb12808fc69, server=jenkins-hbase4.apache.org,33357,1690009829125 in 194 msec 2023-07-22 07:10:33,060 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97ad6d84bd0754d12df91fb12808fc69, UNASSIGN in 212 msec 2023-07-22 07:10:33,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 07:10:33,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 07:10:33,073 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524. 2023-07-22 07:10:33,073 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f. 
2023-07-22 07:10:33,073 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4416e68564631a31aeee1cd90765b524: 2023-07-22 07:10:33,073 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6e16235d88cb385592fd0b9887a65c2f: 2023-07-22 07:10:33,079 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:33,081 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=4416e68564631a31aeee1cd90765b524, regionState=CLOSED 2023-07-22 07:10:33,081 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009833081"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009833081"}]},"ts":"1690009833081"} 2023-07-22 07:10:33,081 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:33,081 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:33,083 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c8e1347c16201fa7264dd07badd12c71, disabling compactions & flushes 2023-07-22 07:10:33,083 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 2023-07-22 07:10:33,083 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 2023-07-22 07:10:33,083 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=6e16235d88cb385592fd0b9887a65c2f, regionState=CLOSED 2023-07-22 07:10:33,083 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. after waiting 0 ms 2023-07-22 07:10:33,083 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 
2023-07-22 07:10:33,083 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009833083"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009833083"}]},"ts":"1690009833083"} 2023-07-22 07:10:33,094 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=41 2023-07-22 07:10:33,094 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=41, state=SUCCESS; CloseRegionProcedure 4416e68564631a31aeee1cd90765b524, server=jenkins-hbase4.apache.org,33357,1690009829125 in 222 msec 2023-07-22 07:10:33,094 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=43 2023-07-22 07:10:33,094 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=43, state=SUCCESS; CloseRegionProcedure 6e16235d88cb385592fd0b9887a65c2f, server=jenkins-hbase4.apache.org,34133,1690009825283 in 234 msec 2023-07-22 07:10:33,096 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4416e68564631a31aeee1cd90765b524, UNASSIGN in 248 msec 2023-07-22 07:10:33,096 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e16235d88cb385592fd0b9887a65c2f, UNASSIGN in 248 msec 2023-07-22 07:10:33,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 07:10:33,119 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71. 
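[Annotation] Once the last region close is reported and pid=38 completes (next entries), the listener starts a truncate with preserveSplits=true, stored as TruncateTableProcedure pid=49; that procedure archives the old region directories and re-creates the table with the same split keys. A hedged sketch of the corresponding client call (standard 2.x Admin API):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateTable {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // The table must already be disabled (pid=38 above). Truncate drops all data
          // but, with preserveSplits=true, keeps the existing split points
          // (aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B, zzzzz).
          admin.truncateTable(tn, /* preserveSplits = */ true);
        }
      }
    }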
2023-07-22 07:10:33,119 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c8e1347c16201fa7264dd07badd12c71: 2023-07-22 07:10:33,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:33,123 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=c8e1347c16201fa7264dd07badd12c71, regionState=CLOSED 2023-07-22 07:10:33,123 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009833123"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009833123"}]},"ts":"1690009833123"} 2023-07-22 07:10:33,140 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=42 2023-07-22 07:10:33,140 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=42, state=SUCCESS; CloseRegionProcedure c8e1347c16201fa7264dd07badd12c71, server=jenkins-hbase4.apache.org,34133,1690009825283 in 265 msec 2023-07-22 07:10:33,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-22 07:10:33,155 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=38 2023-07-22 07:10:33,155 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8e1347c16201fa7264dd07badd12c71, UNASSIGN in 294 msec 2023-07-22 07:10:33,162 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009833161"}]},"ts":"1690009833161"} 2023-07-22 07:10:33,168 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-22 07:10:33,171 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-22 07:10:33,174 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 349 msec 2023-07-22 07:10:33,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-22 07:10:33,457 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 38 completed 2023-07-22 07:10:33,458 INFO [Listener at localhost/46507] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:33,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:33,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=49, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-22 07:10:33,475 DEBUG [PEWorker-3] 
procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-22 07:10:33,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-22 07:10:33,495 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:33,495 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:33,495 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:33,495 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:33,495 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:33,504 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71/recovered.edits] 2023-07-22 07:10:33,504 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69/recovered.edits] 2023-07-22 07:10:33,504 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c/recovered.edits] 2023-07-22 07:10:33,505 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524/recovered.edits] 2023-07-22 07:10:33,505 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f/recovered.edits] 2023-07-22 07:10:33,532 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71/recovered.edits/7.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71/recovered.edits/7.seqid 2023-07-22 07:10:33,532 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524/recovered.edits/7.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524/recovered.edits/7.seqid 2023-07-22 07:10:33,536 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c/recovered.edits/7.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c/recovered.edits/7.seqid 2023-07-22 07:10:33,539 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4416e68564631a31aeee1cd90765b524 2023-07-22 07:10:33,540 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8e1347c16201fa7264dd07badd12c71 2023-07-22 07:10:33,540 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb5cc297e84ee621ef116b994f44e02c 2023-07-22 07:10:33,540 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f/recovered.edits/7.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f/recovered.edits/7.seqid 2023-07-22 07:10:33,541 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e16235d88cb385592fd0b9887a65c2f 2023-07-22 07:10:33,542 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69/recovered.edits/7.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69/recovered.edits/7.seqid 2023-07-22 07:10:33,544 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97ad6d84bd0754d12df91fb12808fc69 2023-07-22 07:10:33,544 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-22 07:10:33,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-22 07:10:33,581 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-22 07:10:33,596 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-22 07:10:33,597 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-22 07:10:33,597 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009833597"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:33,597 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009833597"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:33,597 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009833597"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:33,597 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009833597"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:33,598 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009833597"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:33,601 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-22 07:10:33,601 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 97ad6d84bd0754d12df91fb12808fc69, NAME => 'Group_testTableMoveTruncateAndDrop,,1690009829529.97ad6d84bd0754d12df91fb12808fc69.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => cb5cc297e84ee621ef116b994f44e02c, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690009829529.cb5cc297e84ee621ef116b994f44e02c.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 
4416e68564631a31aeee1cd90765b524, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009829529.4416e68564631a31aeee1cd90765b524.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => c8e1347c16201fa7264dd07badd12c71, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009829529.c8e1347c16201fa7264dd07badd12c71.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 6e16235d88cb385592fd0b9887a65c2f, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690009829529.6e16235d88cb385592fd0b9887a65c2f.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-22 07:10:33,601 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-22 07:10:33,602 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690009833602"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:33,604 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-22 07:10:33,616 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8978337544dc8a712dcfa0c3594dbd17 2023-07-22 07:10:33,616 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9322e82492ab3602e0fc027866777177 2023-07-22 07:10:33,616 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/13ae0d02ff860c7779b221b88913c7c5 2023-07-22 07:10:33,616 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3a8b001d4b76a631121dfdaa851639 2023-07-22 07:10:33,616 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc5476138d402953894afaaa5ff1a282 2023-07-22 07:10:33,617 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8978337544dc8a712dcfa0c3594dbd17 empty. 2023-07-22 07:10:33,618 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/13ae0d02ff860c7779b221b88913c7c5 empty. 2023-07-22 07:10:33,618 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3a8b001d4b76a631121dfdaa851639 empty. 2023-07-22 07:10:33,618 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9322e82492ab3602e0fc027866777177 empty. 
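[Annotation] At this point the five old regions have been deleted from hbase:meta and the truncate is clearing and re-creating directories for five replacement regions: new encoded names, same key ranges. After the procedure finishes, the preserved splits can be confirmed from a client, for example as in this sketch (standard 2.x Admin API, illustrative only):

    import java.io.IOException;
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CheckSplitsPreserved {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Five regions are expected, with the same start keys as before the truncate
          // but different encoded names, matching the "creating {ENCODED => ...}" entries.
          List<RegionInfo> regions = admin.getRegions(tn);
          for (RegionInfo ri : regions) {
            System.out.println(ri.getEncodedName() + " start="
                + Bytes.toStringBinary(ri.getStartKey()));
          }
        }
      }
    }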
2023-07-22 07:10:33,618 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc5476138d402953894afaaa5ff1a282 empty. 2023-07-22 07:10:33,618 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8978337544dc8a712dcfa0c3594dbd17 2023-07-22 07:10:33,619 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9322e82492ab3602e0fc027866777177 2023-07-22 07:10:33,619 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc5476138d402953894afaaa5ff1a282 2023-07-22 07:10:33,619 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/13ae0d02ff860c7779b221b88913c7c5 2023-07-22 07:10:33,619 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3a8b001d4b76a631121dfdaa851639 2023-07-22 07:10:33,619 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-22 07:10:33,653 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:33,659 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => fc5476138d402953894afaaa5ff1a282, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:33,659 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8978337544dc8a712dcfa0c3594dbd17, NAME => 'Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:33,660 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => da3a8b001d4b76a631121dfdaa851639, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:33,670 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-22 07:10:33,776 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:33,776 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 8978337544dc8a712dcfa0c3594dbd17, disabling compactions & flushes 2023-07-22 07:10:33,776 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17. 2023-07-22 07:10:33,776 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17. 2023-07-22 07:10:33,777 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17. after waiting 0 ms 2023-07-22 07:10:33,777 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17. 2023-07-22 07:10:33,777 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17. 
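[Annotation] The "creating {ENCODED => ...}" entries spell out the descriptor the truncate re-uses: a single column family 'f' with VERSIONS=1, BLOOMFILTER=NONE, BLOCKCACHE=true, BLOCKSIZE=65536, and REGION_REPLICATION=1. For reference, an equivalent descriptor could be built with the 2.x builder API roughly as below; this is a sketch reconstructed from the logged attributes (only a subset is shown), not the test's own code.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BuildDescriptor {
      public static TableDescriptor build() {
        ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("f"))
            .setMaxVersions(1)                  // VERSIONS => '1'
            .setBloomFilterType(BloomType.NONE) // BLOOMFILTER => 'NONE'
            .setBlockCacheEnabled(true)         // BLOCKCACHE => 'true'
            .setBlocksize(65536)                // BLOCKSIZE => '65536'
            .build();
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
            .setRegionReplication(1)            // REGION_REPLICATION => '1'
            .setColumnFamily(cf)
            .build();
      }

      public static void main(String[] args) {
        System.out.println(build());
      }
    }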
2023-07-22 07:10:33,777 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 8978337544dc8a712dcfa0c3594dbd17: 2023-07-22 07:10:33,777 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 13ae0d02ff860c7779b221b88913c7c5, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:33,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-22 07:10:33,791 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:33,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing da3a8b001d4b76a631121dfdaa851639, disabling compactions & flushes 2023-07-22 07:10:33,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:33,792 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639. 2023-07-22 07:10:33,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing fc5476138d402953894afaaa5ff1a282, disabling compactions & flushes 2023-07-22 07:10:33,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639. 2023-07-22 07:10:33,792 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282. 2023-07-22 07:10:33,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639. after waiting 0 ms 2023-07-22 07:10:33,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282. 
2023-07-22 07:10:33,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639. 2023-07-22 07:10:33,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282. after waiting 0 ms 2023-07-22 07:10:33,792 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639. 2023-07-22 07:10:33,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282. 2023-07-22 07:10:33,792 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282. 2023-07-22 07:10:33,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for da3a8b001d4b76a631121dfdaa851639: 2023-07-22 07:10:33,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for fc5476138d402953894afaaa5ff1a282: 2023-07-22 07:10:33,793 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 9322e82492ab3602e0fc027866777177, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:33,829 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:33,829 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 9322e82492ab3602e0fc027866777177, disabling compactions & flushes 2023-07-22 07:10:33,829 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177. 2023-07-22 07:10:33,829 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177. 
2023-07-22 07:10:33,829 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177. after waiting 0 ms 2023-07-22 07:10:33,830 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177. 2023-07-22 07:10:33,830 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177. 2023-07-22 07:10:33,830 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 9322e82492ab3602e0fc027866777177: 2023-07-22 07:10:33,831 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:33,831 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 13ae0d02ff860c7779b221b88913c7c5, disabling compactions & flushes 2023-07-22 07:10:33,831 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5. 2023-07-22 07:10:33,831 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5. 2023-07-22 07:10:33,831 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5. after waiting 0 ms 2023-07-22 07:10:33,831 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5. 2023-07-22 07:10:33,831 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5. 
2023-07-22 07:10:33,831 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 13ae0d02ff860c7779b221b88913c7c5: 2023-07-22 07:10:33,838 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009833838"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009833838"}]},"ts":"1690009833838"} 2023-07-22 07:10:33,838 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009833838"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009833838"}]},"ts":"1690009833838"} 2023-07-22 07:10:33,839 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009833838"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009833838"}]},"ts":"1690009833838"} 2023-07-22 07:10:33,839 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009833838"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009833838"}]},"ts":"1690009833838"} 2023-07-22 07:10:33,839 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009833838"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009833838"}]},"ts":"1690009833838"} 2023-07-22 07:10:33,844 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-22 07:10:33,846 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009833846"}]},"ts":"1690009833846"} 2023-07-22 07:10:33,847 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-22 07:10:33,848 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-22 07:10:33,849 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-22 07:10:33,850 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-22 07:10:33,851 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-22 07:10:33,851 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 07:10:33,851 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-22 07:10:33,851 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-22 07:10:33,851 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-22 07:10:33,851 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-22 07:10:33,856 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:33,857 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:33,857 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:33,857 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:33,860 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8978337544dc8a712dcfa0c3594dbd17, ASSIGN}, {pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc5476138d402953894afaaa5ff1a282, ASSIGN}, {pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da3a8b001d4b76a631121dfdaa851639, ASSIGN}, {pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=13ae0d02ff860c7779b221b88913c7c5, ASSIGN}, {pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9322e82492ab3602e0fc027866777177, ASSIGN}] 2023-07-22 07:10:33,863 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=13ae0d02ff860c7779b221b88913c7c5, ASSIGN 2023-07-22 07:10:33,863 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc5476138d402953894afaaa5ff1a282, ASSIGN 2023-07-22 07:10:33,863 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9322e82492ab3602e0fc027866777177, ASSIGN 2023-07-22 07:10:33,863 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da3a8b001d4b76a631121dfdaa851639, ASSIGN 2023-07-22 07:10:33,863 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8978337544dc8a712dcfa0c3594dbd17, ASSIGN 2023-07-22 07:10:33,864 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=13ae0d02ff860c7779b221b88913c7c5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33357,1690009829125; forceNewPlan=false, retain=false 2023-07-22 07:10:33,865 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9322e82492ab3602e0fc027866777177, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34133,1690009825283; forceNewPlan=false, retain=false 2023-07-22 07:10:33,865 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da3a8b001d4b76a631121dfdaa851639, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34133,1690009825283; forceNewPlan=false, retain=false 2023-07-22 07:10:33,865 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8978337544dc8a712dcfa0c3594dbd17, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33357,1690009829125; forceNewPlan=false, retain=false 2023-07-22 07:10:33,865 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=51, ppid=49, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc5476138d402953894afaaa5ff1a282, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33357,1690009829125; forceNewPlan=false, retain=false 2023-07-22 07:10:34,015 INFO [jenkins-hbase4:37061] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-22 07:10:34,018 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=13ae0d02ff860c7779b221b88913c7c5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:34,018 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=da3a8b001d4b76a631121dfdaa851639, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:34,018 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009834018"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009834018"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009834018"}]},"ts":"1690009834018"} 2023-07-22 07:10:34,018 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=fc5476138d402953894afaaa5ff1a282, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:34,018 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009834018"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009834018"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009834018"}]},"ts":"1690009834018"} 2023-07-22 07:10:34,018 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=8978337544dc8a712dcfa0c3594dbd17, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:34,018 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=9322e82492ab3602e0fc027866777177, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:34,019 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009834018"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009834018"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009834018"}]},"ts":"1690009834018"} 2023-07-22 07:10:34,019 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009834018"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009834018"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009834018"}]},"ts":"1690009834018"} 2023-07-22 07:10:34,019 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009834018"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009834018"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009834018"}]},"ts":"1690009834018"} 2023-07-22 07:10:34,021 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=53, state=RUNNABLE; OpenRegionProcedure 13ae0d02ff860c7779b221b88913c7c5, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:34,022 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=52, state=RUNNABLE; OpenRegionProcedure da3a8b001d4b76a631121dfdaa851639, server=jenkins-hbase4.apache.org,34133,1690009825283}] 2023-07-22 07:10:34,023 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=50, state=RUNNABLE; OpenRegionProcedure 8978337544dc8a712dcfa0c3594dbd17, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:34,024 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=51, state=RUNNABLE; OpenRegionProcedure fc5476138d402953894afaaa5ff1a282, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:34,025 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=54, state=RUNNABLE; OpenRegionProcedure 9322e82492ab3602e0fc027866777177, server=jenkins-hbase4.apache.org,34133,1690009825283}] 2023-07-22 07:10:34,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-22 07:10:34,178 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5. 
2023-07-22 07:10:34,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 13ae0d02ff860c7779b221b88913c7c5, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-22 07:10:34,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 13ae0d02ff860c7779b221b88913c7c5 2023-07-22 07:10:34,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:34,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 13ae0d02ff860c7779b221b88913c7c5 2023-07-22 07:10:34,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 13ae0d02ff860c7779b221b88913c7c5 2023-07-22 07:10:34,181 INFO [StoreOpener-13ae0d02ff860c7779b221b88913c7c5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 13ae0d02ff860c7779b221b88913c7c5 2023-07-22 07:10:34,181 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177. 
2023-07-22 07:10:34,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9322e82492ab3602e0fc027866777177, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-22 07:10:34,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9322e82492ab3602e0fc027866777177 2023-07-22 07:10:34,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:34,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9322e82492ab3602e0fc027866777177 2023-07-22 07:10:34,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9322e82492ab3602e0fc027866777177 2023-07-22 07:10:34,185 DEBUG [StoreOpener-13ae0d02ff860c7779b221b88913c7c5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/13ae0d02ff860c7779b221b88913c7c5/f 2023-07-22 07:10:34,185 DEBUG [StoreOpener-13ae0d02ff860c7779b221b88913c7c5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/13ae0d02ff860c7779b221b88913c7c5/f 2023-07-22 07:10:34,186 INFO [StoreOpener-9322e82492ab3602e0fc027866777177-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9322e82492ab3602e0fc027866777177 2023-07-22 07:10:34,186 INFO [StoreOpener-13ae0d02ff860c7779b221b88913c7c5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 13ae0d02ff860c7779b221b88913c7c5 columnFamilyName f 2023-07-22 07:10:34,188 INFO [StoreOpener-13ae0d02ff860c7779b221b88913c7c5-1] regionserver.HStore(310): Store=13ae0d02ff860c7779b221b88913c7c5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:34,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/13ae0d02ff860c7779b221b88913c7c5 
2023-07-22 07:10:34,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/13ae0d02ff860c7779b221b88913c7c5 2023-07-22 07:10:34,190 DEBUG [StoreOpener-9322e82492ab3602e0fc027866777177-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/9322e82492ab3602e0fc027866777177/f 2023-07-22 07:10:34,190 DEBUG [StoreOpener-9322e82492ab3602e0fc027866777177-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/9322e82492ab3602e0fc027866777177/f 2023-07-22 07:10:34,190 INFO [StoreOpener-9322e82492ab3602e0fc027866777177-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9322e82492ab3602e0fc027866777177 columnFamilyName f 2023-07-22 07:10:34,191 INFO [StoreOpener-9322e82492ab3602e0fc027866777177-1] regionserver.HStore(310): Store=9322e82492ab3602e0fc027866777177/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:34,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/9322e82492ab3602e0fc027866777177 2023-07-22 07:10:34,194 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/9322e82492ab3602e0fc027866777177 2023-07-22 07:10:34,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9322e82492ab3602e0fc027866777177 2023-07-22 07:10:34,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 13ae0d02ff860c7779b221b88913c7c5 2023-07-22 07:10:34,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/9322e82492ab3602e0fc027866777177/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:34,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/13ae0d02ff860c7779b221b88913c7c5/recovered.edits/1.seqid, 
newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:34,204 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9322e82492ab3602e0fc027866777177; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9408113280, jitterRate=-0.12380117177963257}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:34,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9322e82492ab3602e0fc027866777177: 2023-07-22 07:10:34,205 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177., pid=59, masterSystemTime=1690009834176 2023-07-22 07:10:34,205 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 13ae0d02ff860c7779b221b88913c7c5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11084058400, jitterRate=0.03228338062763214}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:34,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 13ae0d02ff860c7779b221b88913c7c5: 2023-07-22 07:10:34,207 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5., pid=55, masterSystemTime=1690009834173 2023-07-22 07:10:34,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177. 2023-07-22 07:10:34,207 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177. 2023-07-22 07:10:34,208 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639. 
2023-07-22 07:10:34,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => da3a8b001d4b76a631121dfdaa851639, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-22 07:10:34,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop da3a8b001d4b76a631121dfdaa851639 2023-07-22 07:10:34,208 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=9322e82492ab3602e0fc027866777177, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:34,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:34,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for da3a8b001d4b76a631121dfdaa851639 2023-07-22 07:10:34,209 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009834208"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009834208"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009834208"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009834208"}]},"ts":"1690009834208"} 2023-07-22 07:10:34,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for da3a8b001d4b76a631121dfdaa851639 2023-07-22 07:10:34,214 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=13ae0d02ff860c7779b221b88913c7c5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:34,214 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009834214"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009834214"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009834214"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009834214"}]},"ts":"1690009834214"} 2023-07-22 07:10:34,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5. 2023-07-22 07:10:34,215 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5. 2023-07-22 07:10:34,215 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17. 
2023-07-22 07:10:34,215 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=54 2023-07-22 07:10:34,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8978337544dc8a712dcfa0c3594dbd17, NAME => 'Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-22 07:10:34,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8978337544dc8a712dcfa0c3594dbd17 2023-07-22 07:10:34,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:34,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8978337544dc8a712dcfa0c3594dbd17 2023-07-22 07:10:34,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8978337544dc8a712dcfa0c3594dbd17 2023-07-22 07:10:34,215 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=54, state=SUCCESS; OpenRegionProcedure 9322e82492ab3602e0fc027866777177, server=jenkins-hbase4.apache.org,34133,1690009825283 in 187 msec 2023-07-22 07:10:34,220 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9322e82492ab3602e0fc027866777177, ASSIGN in 355 msec 2023-07-22 07:10:34,223 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=53 2023-07-22 07:10:34,223 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=53, state=SUCCESS; OpenRegionProcedure 13ae0d02ff860c7779b221b88913c7c5, server=jenkins-hbase4.apache.org,33357,1690009829125 in 199 msec 2023-07-22 07:10:34,225 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=13ae0d02ff860c7779b221b88913c7c5, ASSIGN in 363 msec 2023-07-22 07:10:34,227 INFO [StoreOpener-da3a8b001d4b76a631121dfdaa851639-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region da3a8b001d4b76a631121dfdaa851639 2023-07-22 07:10:34,227 INFO [StoreOpener-8978337544dc8a712dcfa0c3594dbd17-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8978337544dc8a712dcfa0c3594dbd17 2023-07-22 07:10:34,229 DEBUG [StoreOpener-da3a8b001d4b76a631121dfdaa851639-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/da3a8b001d4b76a631121dfdaa851639/f 2023-07-22 07:10:34,229 DEBUG [StoreOpener-da3a8b001d4b76a631121dfdaa851639-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/da3a8b001d4b76a631121dfdaa851639/f 2023-07-22 07:10:34,229 INFO [StoreOpener-da3a8b001d4b76a631121dfdaa851639-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region da3a8b001d4b76a631121dfdaa851639 columnFamilyName f 2023-07-22 07:10:34,230 DEBUG [StoreOpener-8978337544dc8a712dcfa0c3594dbd17-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/8978337544dc8a712dcfa0c3594dbd17/f 2023-07-22 07:10:34,230 DEBUG [StoreOpener-8978337544dc8a712dcfa0c3594dbd17-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/8978337544dc8a712dcfa0c3594dbd17/f 2023-07-22 07:10:34,230 INFO [StoreOpener-8978337544dc8a712dcfa0c3594dbd17-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8978337544dc8a712dcfa0c3594dbd17 columnFamilyName f 2023-07-22 07:10:34,231 INFO [StoreOpener-da3a8b001d4b76a631121dfdaa851639-1] regionserver.HStore(310): Store=da3a8b001d4b76a631121dfdaa851639/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:34,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/da3a8b001d4b76a631121dfdaa851639 2023-07-22 07:10:34,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/da3a8b001d4b76a631121dfdaa851639 2023-07-22 07:10:34,235 INFO [StoreOpener-8978337544dc8a712dcfa0c3594dbd17-1] regionserver.HStore(310): Store=8978337544dc8a712dcfa0c3594dbd17/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:34,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/8978337544dc8a712dcfa0c3594dbd17 2023-07-22 07:10:34,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/8978337544dc8a712dcfa0c3594dbd17 2023-07-22 07:10:34,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for da3a8b001d4b76a631121dfdaa851639 2023-07-22 07:10:34,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8978337544dc8a712dcfa0c3594dbd17 2023-07-22 07:10:34,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/da3a8b001d4b76a631121dfdaa851639/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:34,261 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened da3a8b001d4b76a631121dfdaa851639; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9397846080, jitterRate=-0.12475737929344177}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:34,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for da3a8b001d4b76a631121dfdaa851639: 2023-07-22 07:10:34,262 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639., pid=56, masterSystemTime=1690009834176 2023-07-22 07:10:34,263 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/8978337544dc8a712dcfa0c3594dbd17/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:34,264 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8978337544dc8a712dcfa0c3594dbd17; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9577998400, jitterRate=-0.10797938704490662}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:34,264 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8978337544dc8a712dcfa0c3594dbd17: 2023-07-22 07:10:34,265 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17., pid=57, masterSystemTime=1690009834173 2023-07-22 07:10:34,269 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639. 2023-07-22 07:10:34,270 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639. 2023-07-22 07:10:34,271 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=da3a8b001d4b76a631121dfdaa851639, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:34,271 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009834270"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009834270"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009834270"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009834270"}]},"ts":"1690009834270"} 2023-07-22 07:10:34,272 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=8978337544dc8a712dcfa0c3594dbd17, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:34,272 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009834272"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009834272"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009834272"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009834272"}]},"ts":"1690009834272"} 2023-07-22 07:10:34,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17. 2023-07-22 07:10:34,273 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17. 2023-07-22 07:10:34,273 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282. 
2023-07-22 07:10:34,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fc5476138d402953894afaaa5ff1a282, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-22 07:10:34,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop fc5476138d402953894afaaa5ff1a282 2023-07-22 07:10:34,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:34,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fc5476138d402953894afaaa5ff1a282 2023-07-22 07:10:34,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fc5476138d402953894afaaa5ff1a282 2023-07-22 07:10:34,276 INFO [StoreOpener-fc5476138d402953894afaaa5ff1a282-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fc5476138d402953894afaaa5ff1a282 2023-07-22 07:10:34,279 DEBUG [StoreOpener-fc5476138d402953894afaaa5ff1a282-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/fc5476138d402953894afaaa5ff1a282/f 2023-07-22 07:10:34,279 DEBUG [StoreOpener-fc5476138d402953894afaaa5ff1a282-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/fc5476138d402953894afaaa5ff1a282/f 2023-07-22 07:10:34,281 INFO [StoreOpener-fc5476138d402953894afaaa5ff1a282-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fc5476138d402953894afaaa5ff1a282 columnFamilyName f 2023-07-22 07:10:34,282 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=52 2023-07-22 07:10:34,282 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; OpenRegionProcedure da3a8b001d4b76a631121dfdaa851639, server=jenkins-hbase4.apache.org,34133,1690009825283 in 253 msec 2023-07-22 07:10:34,282 INFO [StoreOpener-fc5476138d402953894afaaa5ff1a282-1] regionserver.HStore(310): Store=fc5476138d402953894afaaa5ff1a282/f, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:34,282 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=50 2023-07-22 07:10:34,283 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=50, state=SUCCESS; OpenRegionProcedure 8978337544dc8a712dcfa0c3594dbd17, server=jenkins-hbase4.apache.org,33357,1690009829125 in 252 msec 2023-07-22 07:10:34,285 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/fc5476138d402953894afaaa5ff1a282 2023-07-22 07:10:34,285 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8978337544dc8a712dcfa0c3594dbd17, ASSIGN in 426 msec 2023-07-22 07:10:34,285 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da3a8b001d4b76a631121dfdaa851639, ASSIGN in 422 msec 2023-07-22 07:10:34,285 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/fc5476138d402953894afaaa5ff1a282 2023-07-22 07:10:34,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fc5476138d402953894afaaa5ff1a282 2023-07-22 07:10:34,300 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/fc5476138d402953894afaaa5ff1a282/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:34,301 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fc5476138d402953894afaaa5ff1a282; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11996102240, jitterRate=0.1172240823507309}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:34,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fc5476138d402953894afaaa5ff1a282: 2023-07-22 07:10:34,302 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282., pid=58, masterSystemTime=1690009834173 2023-07-22 07:10:34,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282. 2023-07-22 07:10:34,306 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282. 
2023-07-22 07:10:34,306 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=fc5476138d402953894afaaa5ff1a282, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:34,307 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009834306"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009834306"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009834306"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009834306"}]},"ts":"1690009834306"} 2023-07-22 07:10:34,314 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=51 2023-07-22 07:10:34,314 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=51, state=SUCCESS; OpenRegionProcedure fc5476138d402953894afaaa5ff1a282, server=jenkins-hbase4.apache.org,33357,1690009829125 in 285 msec 2023-07-22 07:10:34,317 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=49 2023-07-22 07:10:34,318 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc5476138d402953894afaaa5ff1a282, ASSIGN in 457 msec 2023-07-22 07:10:34,318 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009834318"}]},"ts":"1690009834318"} 2023-07-22 07:10:34,320 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-22 07:10:34,322 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-22 07:10:34,331 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=49, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 857 msec 2023-07-22 07:10:34,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-22 07:10:34,588 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 49 completed 2023-07-22 07:10:34,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:34,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:34,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:34,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:34,593 INFO [Listener at localhost/46507] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:34,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:34,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=60, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:34,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-22 07:10:34,599 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009834599"}]},"ts":"1690009834599"} 2023-07-22 07:10:34,601 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-22 07:10:34,604 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-22 07:10:34,605 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8978337544dc8a712dcfa0c3594dbd17, UNASSIGN}, {pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc5476138d402953894afaaa5ff1a282, UNASSIGN}, {pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da3a8b001d4b76a631121dfdaa851639, UNASSIGN}, {pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=13ae0d02ff860c7779b221b88913c7c5, UNASSIGN}, {pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9322e82492ab3602e0fc027866777177, UNASSIGN}] 2023-07-22 07:10:34,608 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8978337544dc8a712dcfa0c3594dbd17, UNASSIGN 2023-07-22 07:10:34,608 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9322e82492ab3602e0fc027866777177, UNASSIGN 2023-07-22 07:10:34,608 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=13ae0d02ff860c7779b221b88913c7c5, UNASSIGN 2023-07-22 07:10:34,609 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc5476138d402953894afaaa5ff1a282, UNASSIGN 2023-07-22 
07:10:34,609 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=8978337544dc8a712dcfa0c3594dbd17, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:34,609 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da3a8b001d4b76a631121dfdaa851639, UNASSIGN 2023-07-22 07:10:34,609 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009834609"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009834609"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009834609"}]},"ts":"1690009834609"} 2023-07-22 07:10:34,610 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=9322e82492ab3602e0fc027866777177, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:34,610 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009834610"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009834610"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009834610"}]},"ts":"1690009834610"} 2023-07-22 07:10:34,610 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=13ae0d02ff860c7779b221b88913c7c5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:34,610 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009834610"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009834610"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009834610"}]},"ts":"1690009834610"} 2023-07-22 07:10:34,611 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=fc5476138d402953894afaaa5ff1a282, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:34,611 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009834611"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009834611"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009834611"}]},"ts":"1690009834611"} 2023-07-22 07:10:34,611 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=da3a8b001d4b76a631121dfdaa851639, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:34,611 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009834611"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009834611"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009834611"}]},"ts":"1690009834611"} 2023-07-22 07:10:34,612 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=61, state=RUNNABLE; CloseRegionProcedure 8978337544dc8a712dcfa0c3594dbd17, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:34,615 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=64, state=RUNNABLE; CloseRegionProcedure 13ae0d02ff860c7779b221b88913c7c5, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:34,615 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=65, state=RUNNABLE; CloseRegionProcedure 9322e82492ab3602e0fc027866777177, server=jenkins-hbase4.apache.org,34133,1690009825283}] 2023-07-22 07:10:34,616 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=62, state=RUNNABLE; CloseRegionProcedure fc5476138d402953894afaaa5ff1a282, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:34,618 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=63, state=RUNNABLE; CloseRegionProcedure da3a8b001d4b76a631121dfdaa851639, server=jenkins-hbase4.apache.org,34133,1690009825283}] 2023-07-22 07:10:34,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-22 07:10:34,766 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fc5476138d402953894afaaa5ff1a282 2023-07-22 07:10:34,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fc5476138d402953894afaaa5ff1a282, disabling compactions & flushes 2023-07-22 07:10:34,767 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282. 2023-07-22 07:10:34,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282. 2023-07-22 07:10:34,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282. after waiting 0 ms 2023-07-22 07:10:34,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282. 
2023-07-22 07:10:34,769 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close da3a8b001d4b76a631121dfdaa851639 2023-07-22 07:10:34,770 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing da3a8b001d4b76a631121dfdaa851639, disabling compactions & flushes 2023-07-22 07:10:34,770 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639. 2023-07-22 07:10:34,770 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639. 2023-07-22 07:10:34,770 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639. after waiting 0 ms 2023-07-22 07:10:34,770 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639. 2023-07-22 07:10:34,774 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/fc5476138d402953894afaaa5ff1a282/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:34,775 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282. 2023-07-22 07:10:34,775 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fc5476138d402953894afaaa5ff1a282: 2023-07-22 07:10:34,777 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fc5476138d402953894afaaa5ff1a282 2023-07-22 07:10:34,778 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8978337544dc8a712dcfa0c3594dbd17 2023-07-22 07:10:34,778 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=fc5476138d402953894afaaa5ff1a282, regionState=CLOSED 2023-07-22 07:10:34,779 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009834778"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009834778"}]},"ts":"1690009834778"} 2023-07-22 07:10:34,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8978337544dc8a712dcfa0c3594dbd17, disabling compactions & flushes 2023-07-22 07:10:34,783 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17. 2023-07-22 07:10:34,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17. 
2023-07-22 07:10:34,784 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17. after waiting 0 ms 2023-07-22 07:10:34,784 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17. 2023-07-22 07:10:34,787 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=62 2023-07-22 07:10:34,787 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=62, state=SUCCESS; CloseRegionProcedure fc5476138d402953894afaaa5ff1a282, server=jenkins-hbase4.apache.org,33357,1690009829125 in 168 msec 2023-07-22 07:10:34,788 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc5476138d402953894afaaa5ff1a282, UNASSIGN in 182 msec 2023-07-22 07:10:34,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/da3a8b001d4b76a631121dfdaa851639/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:34,797 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639. 2023-07-22 07:10:34,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for da3a8b001d4b76a631121dfdaa851639: 2023-07-22 07:10:34,800 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed da3a8b001d4b76a631121dfdaa851639 2023-07-22 07:10:34,800 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9322e82492ab3602e0fc027866777177 2023-07-22 07:10:34,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9322e82492ab3602e0fc027866777177, disabling compactions & flushes 2023-07-22 07:10:34,801 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177. 2023-07-22 07:10:34,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177. 2023-07-22 07:10:34,802 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177. after waiting 0 ms 2023-07-22 07:10:34,802 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177. 
2023-07-22 07:10:34,803 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/8978337544dc8a712dcfa0c3594dbd17/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:34,804 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=da3a8b001d4b76a631121dfdaa851639, regionState=CLOSED 2023-07-22 07:10:34,804 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009834804"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009834804"}]},"ts":"1690009834804"} 2023-07-22 07:10:34,805 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17. 2023-07-22 07:10:34,805 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8978337544dc8a712dcfa0c3594dbd17: 2023-07-22 07:10:34,807 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8978337544dc8a712dcfa0c3594dbd17 2023-07-22 07:10:34,807 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 13ae0d02ff860c7779b221b88913c7c5 2023-07-22 07:10:34,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 13ae0d02ff860c7779b221b88913c7c5, disabling compactions & flushes 2023-07-22 07:10:34,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5. 2023-07-22 07:10:34,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5. 2023-07-22 07:10:34,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5. after waiting 0 ms 2023-07-22 07:10:34,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5. 
2023-07-22 07:10:34,811 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=8978337544dc8a712dcfa0c3594dbd17, regionState=CLOSED 2023-07-22 07:10:34,811 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009834811"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009834811"}]},"ts":"1690009834811"} 2023-07-22 07:10:34,813 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=63 2023-07-22 07:10:34,813 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=63, state=SUCCESS; CloseRegionProcedure da3a8b001d4b76a631121dfdaa851639, server=jenkins-hbase4.apache.org,34133,1690009825283 in 190 msec 2023-07-22 07:10:34,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/9322e82492ab3602e0fc027866777177/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:34,821 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da3a8b001d4b76a631121dfdaa851639, UNASSIGN in 208 msec 2023-07-22 07:10:34,822 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=61 2023-07-22 07:10:34,822 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=61, state=SUCCESS; CloseRegionProcedure 8978337544dc8a712dcfa0c3594dbd17, server=jenkins-hbase4.apache.org,33357,1690009829125 in 201 msec 2023-07-22 07:10:34,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177. 
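
The repeated "Checking to see if procedure is done pid=60" lines come from the client polling the master while each region is closed, a recovered.edits/N.seqid marker is written, and the region's hbase:meta row flips to CLOSED. A hedged sketch of the asynchronous form of the same call, which is what performs that polling on the client side (same assumed connection setup as the previous sketch):

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class DisableTableAsyncSketch {
      // Assumed: 'admin' obtained from a Connection as in the previous sketch.
      public static void disableAndWait(Admin admin) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        // disableTableAsync submits the procedure and returns a future; get() is
        // what drives the repeated "is procedure done" checks seen above in
        // MasterRpcServices.
        Future<Void> done = admin.disableTableAsync(tn);
        done.get(60, TimeUnit.SECONDS);
      }
    }
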
2023-07-22 07:10:34,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9322e82492ab3602e0fc027866777177: 2023-07-22 07:10:34,824 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8978337544dc8a712dcfa0c3594dbd17, UNASSIGN in 217 msec 2023-07-22 07:10:34,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9322e82492ab3602e0fc027866777177 2023-07-22 07:10:34,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testTableMoveTruncateAndDrop/13ae0d02ff860c7779b221b88913c7c5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:34,825 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=9322e82492ab3602e0fc027866777177, regionState=CLOSED 2023-07-22 07:10:34,826 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690009834825"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009834825"}]},"ts":"1690009834825"} 2023-07-22 07:10:34,826 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5. 2023-07-22 07:10:34,826 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 13ae0d02ff860c7779b221b88913c7c5: 2023-07-22 07:10:34,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 13ae0d02ff860c7779b221b88913c7c5 2023-07-22 07:10:34,829 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=13ae0d02ff860c7779b221b88913c7c5, regionState=CLOSED 2023-07-22 07:10:34,829 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690009834829"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009834829"}]},"ts":"1690009834829"} 2023-07-22 07:10:34,830 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=65 2023-07-22 07:10:34,830 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=65, state=SUCCESS; CloseRegionProcedure 9322e82492ab3602e0fc027866777177, server=jenkins-hbase4.apache.org,34133,1690009825283 in 213 msec 2023-07-22 07:10:34,832 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9322e82492ab3602e0fc027866777177, UNASSIGN in 225 msec 2023-07-22 07:10:34,832 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=64 2023-07-22 07:10:34,832 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=64, state=SUCCESS; CloseRegionProcedure 13ae0d02ff860c7779b221b88913c7c5, server=jenkins-hbase4.apache.org,33357,1690009829125 in 215 msec 2023-07-22 07:10:34,834 
INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=60 2023-07-22 07:10:34,834 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=13ae0d02ff860c7779b221b88913c7c5, UNASSIGN in 227 msec 2023-07-22 07:10:34,835 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009834835"}]},"ts":"1690009834835"} 2023-07-22 07:10:34,837 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-22 07:10:34,839 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-22 07:10:34,841 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=60, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 247 msec 2023-07-22 07:10:34,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-22 07:10:34,902 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 60 completed 2023-07-22 07:10:34,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:34,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:34,915 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:34,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_130911739' 2023-07-22 07:10:34,916 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=71, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:34,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:34,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:34,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:34,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:34,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-22 07:10:34,930 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8978337544dc8a712dcfa0c3594dbd17 2023-07-22 07:10:34,930 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9322e82492ab3602e0fc027866777177 2023-07-22 07:10:34,930 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/13ae0d02ff860c7779b221b88913c7c5 2023-07-22 07:10:34,930 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3a8b001d4b76a631121dfdaa851639 2023-07-22 07:10:34,930 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc5476138d402953894afaaa5ff1a282 2023-07-22 07:10:34,932 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9322e82492ab3602e0fc027866777177/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9322e82492ab3602e0fc027866777177/recovered.edits] 2023-07-22 07:10:34,933 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3a8b001d4b76a631121dfdaa851639/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3a8b001d4b76a631121dfdaa851639/recovered.edits] 2023-07-22 07:10:34,933 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc5476138d402953894afaaa5ff1a282/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc5476138d402953894afaaa5ff1a282/recovered.edits] 2023-07-22 07:10:34,933 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8978337544dc8a712dcfa0c3594dbd17/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8978337544dc8a712dcfa0c3594dbd17/recovered.edits] 2023-07-22 07:10:34,934 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/13ae0d02ff860c7779b221b88913c7c5/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/13ae0d02ff860c7779b221b88913c7c5/recovered.edits] 
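
Here DeleteTableProcedure pid=71 starts cleaning up the disabled table: the RSGroupAdminEndpoint drops the table from its rsgroup, and HFileArchiver moves each region directory out of .tmp/data into the archive tree; the region rows are removed from hbase:meta in the records that follow. On the client this whole sequence is a single Admin call; a minimal sketch under the same assumptions as the earlier snippets:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class DeleteTableSketch {
      // Assumed: 'admin' obtained from a Connection as in the first sketch.
      public static void drop(Admin admin) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        // The table must already be disabled (pid=60 above); deleting an enabled
        // table fails with TableNotDisabledException.
        admin.deleteTable(tn); // runs the DeleteTableProcedure (pid=71 in this log)
      }
    }
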
2023-07-22 07:10:34,946 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8978337544dc8a712dcfa0c3594dbd17/recovered.edits/4.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testTableMoveTruncateAndDrop/8978337544dc8a712dcfa0c3594dbd17/recovered.edits/4.seqid 2023-07-22 07:10:34,946 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9322e82492ab3602e0fc027866777177/recovered.edits/4.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testTableMoveTruncateAndDrop/9322e82492ab3602e0fc027866777177/recovered.edits/4.seqid 2023-07-22 07:10:34,946 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3a8b001d4b76a631121dfdaa851639/recovered.edits/4.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testTableMoveTruncateAndDrop/da3a8b001d4b76a631121dfdaa851639/recovered.edits/4.seqid 2023-07-22 07:10:34,947 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/13ae0d02ff860c7779b221b88913c7c5/recovered.edits/4.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testTableMoveTruncateAndDrop/13ae0d02ff860c7779b221b88913c7c5/recovered.edits/4.seqid 2023-07-22 07:10:34,947 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8978337544dc8a712dcfa0c3594dbd17 2023-07-22 07:10:34,947 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc5476138d402953894afaaa5ff1a282/recovered.edits/4.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testTableMoveTruncateAndDrop/fc5476138d402953894afaaa5ff1a282/recovered.edits/4.seqid 2023-07-22 07:10:34,947 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9322e82492ab3602e0fc027866777177 2023-07-22 07:10:34,948 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc5476138d402953894afaaa5ff1a282 2023-07-22 07:10:34,948 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/13ae0d02ff860c7779b221b88913c7c5 2023-07-22 07:10:34,948 DEBUG [HFileArchiver-8] 
backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3a8b001d4b76a631121dfdaa851639 2023-07-22 07:10:34,948 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-22 07:10:34,952 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=71, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:34,963 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-22 07:10:34,966 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-22 07:10:34,968 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=71, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:34,968 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-22 07:10:34,968 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009834968"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:34,968 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009834968"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:34,968 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009834968"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:34,968 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009834968"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:34,968 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009834968"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:34,970 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-22 07:10:34,970 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 8978337544dc8a712dcfa0c3594dbd17, NAME => 'Group_testTableMoveTruncateAndDrop,,1690009833547.8978337544dc8a712dcfa0c3594dbd17.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => fc5476138d402953894afaaa5ff1a282, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690009833547.fc5476138d402953894afaaa5ff1a282.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => da3a8b001d4b76a631121dfdaa851639, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690009833547.da3a8b001d4b76a631121dfdaa851639.', 
STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 13ae0d02ff860c7779b221b88913c7c5, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690009833547.13ae0d02ff860c7779b221b88913c7c5.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 9322e82492ab3602e0fc027866777177, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690009833547.9322e82492ab3602e0fc027866777177.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-22 07:10:34,970 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-22 07:10:34,970 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690009834970"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:34,978 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-22 07:10:34,981 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=71, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 07:10:34,983 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=71, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 73 msec 2023-07-22 07:10:35,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-22 07:10:35,030 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 71 completed 2023-07-22 07:10:35,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:35,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:35,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:35,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:35,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:35,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 07:10:35,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:35,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:35,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:35,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:35,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:35,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:35,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 07:10:35,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:35,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:35,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
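
The teardown in TestRSGroupsBase then moves any remaining tables and servers back to the default group, removes the per-test group, and re-adds a "master" group; the subsequent attempt to move the master's own address into it fails with the ConstraintException further down, which the test logs only as a setup/teardown warning. A hedged sketch of the same administrative calls via RSGroupAdminClient (the class named in the stack trace below); the constructor and method signatures here are assumptions based on the hbase-rsgroup client in this branch, and the server address is just one of the hosts from the log:

    import java.util.Collections;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupTeardownSketch {
      // Assumed: 'conn' is an open Connection to the test cluster.
      public static void restoreDefaults(Connection conn) throws Exception {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        String testGroup = "Group_testTableMoveTruncateAndDrop_130911739";
        // Move the group's servers back to 'default', then drop the group itself.
        Set<Address> servers = Collections.singleton(
            Address.fromParts("jenkins-hbase4.apache.org", 33357));
        groups.moveServers(servers, "default");
        // An empty table set is ignored by the server, matching the
        // "moveTables() passed an empty set. Ignoring." records above.
        groups.moveTables(Collections.<TableName>emptySet(), "default");
        groups.removeRSGroup(testGroup);
      }
    }
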
2023-07-22 07:10:35,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:35,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:33357] to rsgroup default 2023-07-22 07:10:35,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:35,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:35,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:35,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_130911739, current retry=0 2023-07-22 07:10:35,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33357,1690009829125, jenkins-hbase4.apache.org,34133,1690009825283] are moved back to Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:35,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_130911739 => default 2023-07-22 07:10:35,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:35,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_130911739 2023-07-22 07:10:35,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:35,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:35,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:35,079 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:35,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:35,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:35,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:35,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK 
GroupInfo count: 4 2023-07-22 07:10:35,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:35,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:35,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:35,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:35,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:35,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 150 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011035101, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:35,102 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 07:10:35,104 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:35,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:35,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:35,105 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:35,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:35,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:35,135 INFO [Listener at localhost/46507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=494 (was 419) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1558959492_17 at /127.0.0.1:59426 [Receiving block BP-1233006246-172.31.14.131-1690009819581:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (56390283) connection to localhost/127.0.0.1:40817 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1233006246-172.31.14.131-1690009819581:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1233006246-172.31.14.131-1690009819581:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:33357-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 
Potentially hanging thread: qtp1208810196-635-acceptor-0@266bc00-ServerConnector@702b003c{HTTP/1.1, (http/1.1)}{0.0.0.0:38675} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666-prefix:jenkins-hbase4.apache.org,33357,1690009829125 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50c4626a-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:40817 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RSProcedureDispatcher-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1208810196-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1233006246-172.31.14.131-1690009819581:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1208810196-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50c4626a-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1208810196-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50c4626a-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:33357Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56256@0x13fbea4d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: ReadOnlyZKClient-127.0.0.1:56256@0x13fbea4d-SendThread(127.0.0.1:56256) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1208810196-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1558959492_17 at /127.0.0.1:55390 [Receiving block BP-1233006246-172.31.14.131-1690009819581:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Session-HouseKeeper-34788a9-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1208810196-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1208810196-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:33357 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50c4626a-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1558959492_17 at /127.0.0.1:48442 [Receiving block BP-1233006246-172.31.14.131-1690009819581:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1241897708_17 at /127.0.0.1:34334 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1241897708_17 at /127.0.0.1:59412 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56256@0x13fbea4d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1353063304.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50c4626a-shared-pool-10 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1558959492_17 at /127.0.0.1:60292 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1208810196-634 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/737117775.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50c4626a-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=772 (was 675) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=404 (was 361) - SystemLoadAverage LEAK? -, ProcessCount=180 (was 180), AvailableMemoryMB=7112 (was 7734) 2023-07-22 07:10:35,157 INFO [Listener at localhost/46507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=494, OpenFileDescriptor=772, MaxFileDescriptor=60000, SystemLoadAverage=404, ProcessCount=180, AvailableMemoryMB=7110 2023-07-22 07:10:35,158 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-22 07:10:35,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:35,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:35,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:35,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
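For orientation, the records immediately above and below trace the per-test rsgroup cleanup that TestRSGroupsBase runs between methods: list the groups, move any stray tables and servers back to the "default" group, drop and re-create the auxiliary "master" group, and poll ("Waiting for cleanup to finish ...") until the listing settles. The attempt to move the master's address jenkins-hbase4.apache.org:37061 then fails with the ConstraintException shown below, which the test tolerates, because the master is not a region server. What follows is a minimal, hypothetical Java sketch of those client-side calls against the rsgroup admin API; the class name, method names, and the Connection / HBaseTestingUtility arguments are invented for illustration, and this is not the test's own code.

import java.util.TreeSet;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.Waiter;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Hypothetical helper, sketching the cleanup sequence visible in the log records.
public class RSGroupCleanupSketch {

  // Move every non-default group's tables and servers back to "default", then drop the group.
  // Passing an empty set is harmless: the master logs "moveTables() passed an empty set. Ignoring."
  static void restoreDefaultGroup(Connection conn) throws Exception {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    for (RSGroupInfo group : admin.listRSGroups()) {
      if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        continue; // nothing to restore for "default" itself
      }
      admin.moveTables(new TreeSet<>(group.getTables()), RSGroupInfo.DEFAULT_GROUP);
      admin.moveServers(new TreeSet<>(group.getServers()), RSGroupInfo.DEFAULT_GROUP);
      admin.removeRSGroup(group.getName());
    }
  }

  // Poll until only the expected number of groups remains, mirroring the
  // "Waiting up to [60,000] milli-secs" / "Waiting for cleanup to finish" records.
  static void waitForCleanup(HBaseTestingUtility util, RSGroupAdminClient admin,
      int expectedGroupCount) throws Exception {
    util.waitFor(60000, new Waiter.Predicate<Exception>() {
      @Override
      public boolean evaluate() throws Exception {
        return admin.listRSGroups().size() == expectedGroupCount;
      }
    });
  }

  // Re-create the "master" group and offer it the master's RPC address; the server side
  // rejects the address because it is not a region server, exactly as the log shows.
  static void tryMoveMasterAddress(Connection conn, String host, int port) throws Exception {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    admin.addRSGroup("master");
    try {
      admin.moveServers(java.util.Collections.singleton(Address.fromParts(host, port)), "master");
    } catch (ConstraintException expected) {
      // "Server <host>:<port> is either offline or it does not exist."
    }
  }
}

The ZooKeeper records that follow each call ("Updating znode: /hbase/rsgroup/...", "Writing ZK GroupInfo count: ...") come from RSGroupInfoManagerImpl persisting the updated group membership on the master; the sketch above only exercises the client side.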
2023-07-22 07:10:35,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:35,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:35,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:35,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:35,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:35,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:35,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:35,179 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:35,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:35,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:35,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:35,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:35,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:35,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:35,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:35,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:35,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:35,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 178 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011035198, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:35,198 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:10:35,200 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:35,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:35,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:35,201 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:35,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:35,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:35,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-22 07:10:35,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:35,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:38908 deadline: 1690011035203, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-22 07:10:35,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-22 07:10:35,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:35,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 186 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:38908 deadline: 1690011035204, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-22 07:10:35,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-22 07:10:35,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:35,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 188 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:38908 deadline: 1690011035206, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-22 07:10:35,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-22 07:10:35,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-22 07:10:35,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:35,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:35,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:35,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:35,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:35,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:35,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:35,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:35,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:35,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 07:10:35,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:35,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:35,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:35,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-22 07:10:35,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:35,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:35,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 07:10:35,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:35,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:35,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 07:10:35,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:35,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:35,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:35,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:35,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:35,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:35,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:35,245 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:35,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:35,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:35,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:35,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:35,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:35,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:35,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:35,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:35,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:35,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 222 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011035261, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:35,261 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:10:35,263 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:35,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:35,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:35,265 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:35,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:35,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:35,282 INFO [Listener at localhost/46507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=497 (was 494) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=772 (was 772), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=404 (was 404), ProcessCount=180 (was 180), AvailableMemoryMB=7106 (was 7110) 2023-07-22 07:10:35,299 INFO [Listener at localhost/46507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=497, OpenFileDescriptor=772, MaxFileDescriptor=60000, SystemLoadAverage=404, ProcessCount=180, AvailableMemoryMB=7105 2023-07-22 07:10:35,299 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-22 07:10:35,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:35,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:35,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:35,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 07:10:35,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:35,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:35,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:35,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:35,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:35,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:35,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:35,316 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:35,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:35,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:35,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:35,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:35,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:35,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:35,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:35,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:35,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:35,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 250 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011035328, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:35,329 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:10:35,330 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:35,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:35,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:35,332 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:35,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:35,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:35,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:35,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:35,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:35,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:35,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
2023-07-22 07:10:35,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:35,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-22 07:10:35,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:35,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:35,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:35,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:35,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:35,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:33357] to rsgroup bar 2023-07-22 07:10:35,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:35,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-22 07:10:35,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:35,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:35,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-22 07:10:35,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-22 07:10:35,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-22 07:10:35,359 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-22 07:10:35,360 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39057,1690009825637, state=CLOSING 2023-07-22 07:10:35,361 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, 
path=/hbase/meta-region-server 2023-07-22 07:10:35,362 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 07:10:35,362 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=72, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:35,517 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-22 07:10:35,518 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-22 07:10:35,518 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-22 07:10:35,518 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-22 07:10:35,518 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-22 07:10:35,518 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-22 07:10:35,519 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=40.09 KB heapSize=61.91 KB 2023-07-22 07:10:35,604 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=37.02 KB at sequenceid=90 (bloomFilter=false), to=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/.tmp/info/92c73741de0545db9421fdc876dbed8a 2023-07-22 07:10:35,648 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 92c73741de0545db9421fdc876dbed8a 2023-07-22 07:10:35,686 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=90 (bloomFilter=false), to=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/.tmp/rep_barrier/fd499a837a084d17a197e0d63e7d3694 2023-07-22 07:10:35,695 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fd499a837a084d17a197e0d63e7d3694 2023-07-22 07:10:35,716 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.91 KB at sequenceid=90 (bloomFilter=false), to=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/.tmp/table/e64c50ee0c754cad8c38fe9db4656a44 2023-07-22 07:10:35,727 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e64c50ee0c754cad8c38fe9db4656a44 2023-07-22 07:10:35,729 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/.tmp/info/92c73741de0545db9421fdc876dbed8a as 
hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/info/92c73741de0545db9421fdc876dbed8a 2023-07-22 07:10:35,737 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 92c73741de0545db9421fdc876dbed8a 2023-07-22 07:10:35,737 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/info/92c73741de0545db9421fdc876dbed8a, entries=40, sequenceid=90, filesize=9.4 K 2023-07-22 07:10:35,739 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/.tmp/rep_barrier/fd499a837a084d17a197e0d63e7d3694 as hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/rep_barrier/fd499a837a084d17a197e0d63e7d3694 2023-07-22 07:10:35,747 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fd499a837a084d17a197e0d63e7d3694 2023-07-22 07:10:35,747 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/rep_barrier/fd499a837a084d17a197e0d63e7d3694, entries=10, sequenceid=90, filesize=6.1 K 2023-07-22 07:10:35,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/.tmp/table/e64c50ee0c754cad8c38fe9db4656a44 as hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/table/e64c50ee0c754cad8c38fe9db4656a44 2023-07-22 07:10:35,756 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e64c50ee0c754cad8c38fe9db4656a44 2023-07-22 07:10:35,757 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/table/e64c50ee0c754cad8c38fe9db4656a44, entries=15, sequenceid=90, filesize=6.2 K 2023-07-22 07:10:35,758 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~40.09 KB/41051, heapSize ~61.87 KB/63352, currentSize=0 B/0 for 1588230740 in 239ms, sequenceid=90, compaction requested=false 2023-07-22 07:10:35,772 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/recovered.edits/93.seqid, newMaxSeqId=93, maxSeqId=1 2023-07-22 07:10:35,773 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 07:10:35,774 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-22 07:10:35,774 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 
1588230740: 2023-07-22 07:10:35,774 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,41787,1690009825478 record at close sequenceid=90 2023-07-22 07:10:35,776 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-22 07:10:35,777 WARN [PEWorker-3] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-22 07:10:35,779 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=72 2023-07-22 07:10:35,779 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=72, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39057,1690009825637 in 415 msec 2023-07-22 07:10:35,780 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41787,1690009825478; forceNewPlan=false, retain=false 2023-07-22 07:10:35,931 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41787,1690009825478, state=OPENING 2023-07-22 07:10:35,932 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-22 07:10:35,932 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=72, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:35,932 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 07:10:36,089 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-22 07:10:36,089 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:10:36,091 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41787%2C1690009825478.meta, suffix=.meta, logDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,41787,1690009825478, archiveDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/oldWALs, maxLogs=32 2023-07-22 07:10:36,113 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37967,DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4,DISK] 2023-07-22 07:10:36,115 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40309,DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c,DISK] 2023-07-22 07:10:36,115 DEBUG [RS-EventLoopGroup-7-1] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42555,DS-3ac6edaa-2267-467e-8da8-ff002dee7b14,DISK] 2023-07-22 07:10:36,118 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,41787,1690009825478/jenkins-hbase4.apache.org%2C41787%2C1690009825478.meta.1690009836092.meta 2023-07-22 07:10:36,119 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40309,DS-d667cf8d-eebc-4ef1-951e-6ae73b21b74c,DISK], DatanodeInfoWithStorage[127.0.0.1:37967,DS-d5e4f1ef-f5da-4f19-a9eb-a9289adfcfe4,DISK], DatanodeInfoWithStorage[127.0.0.1:42555,DS-3ac6edaa-2267-467e-8da8-ff002dee7b14,DISK]] 2023-07-22 07:10:36,119 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:36,119 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-22 07:10:36,119 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-22 07:10:36,119 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
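The records above trace the client-side sequence that starts this test: an AddRSGroup RPC creates group "bar", the three region servers are moved into it, and because hbase:meta does not belong to "bar" the master first flushes and closes the meta region on jenkins-hbase4.apache.org,39057 and reopens it on 41787 (the server left in the default group) with a fresh AsyncFSWAL meta WAL. A minimal client-side sketch of those two admin calls, assuming the branch-2 hbase-rsgroup RSGroupAdminClient API; the group name, hostnames and ports are taken from the log, not invented:

```java
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class AddGroupAndMoveServers {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Corresponds to the RSGroupAdminService.AddRSGroup request in the log.
      rsGroupAdmin.addRSGroup("bar");

      // Corresponds to RSGroupAdminService.MoveServers: move the three region
      // servers listed in the log into group "bar". Regions that do not belong
      // to the target group (hbase:meta here) are moved off these servers first.
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 39057));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34133));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33357));
      rsGroupAdmin.moveServers(servers, "bar");
    }
  }
}
```

This is a sketch of the old coprocessor-endpoint client used by the hbase-rsgroup module tests, not the only way to drive RS groups; it assumes the RSGroupAdminEndpoint is loaded on the master, as it is in this minicluster.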
2023-07-22 07:10:36,120 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-22 07:10:36,120 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:36,120 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-22 07:10:36,120 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-22 07:10:36,121 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-22 07:10:36,123 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/info 2023-07-22 07:10:36,123 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/info 2023-07-22 07:10:36,123 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-22 07:10:36,137 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 92c73741de0545db9421fdc876dbed8a 2023-07-22 07:10:36,137 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/info/92c73741de0545db9421fdc876dbed8a 2023-07-22 07:10:36,138 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:36,138 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-22 07:10:36,139 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/rep_barrier 2023-07-22 07:10:36,139 DEBUG [StoreOpener-1588230740-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/rep_barrier 2023-07-22 07:10:36,139 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-22 07:10:36,149 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fd499a837a084d17a197e0d63e7d3694 2023-07-22 07:10:36,149 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/rep_barrier/fd499a837a084d17a197e0d63e7d3694 2023-07-22 07:10:36,149 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:36,150 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-22 07:10:36,151 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/table 2023-07-22 07:10:36,151 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/table 2023-07-22 07:10:36,152 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-22 07:10:36,159 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e64c50ee0c754cad8c38fe9db4656a44 2023-07-22 07:10:36,160 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/table/e64c50ee0c754cad8c38fe9db4656a44 2023-07-22 07:10:36,160 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:36,161 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740 2023-07-22 07:10:36,162 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740 2023-07-22 07:10:36,165 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-22 07:10:36,166 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-22 07:10:36,167 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=94; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9927683840, jitterRate=-0.07541239261627197}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-22 07:10:36,167 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-22 07:10:36,168 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=74, masterSystemTime=1690009836085 2023-07-22 07:10:36,170 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-22 07:10:36,170 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-22 07:10:36,171 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41787,1690009825478, state=OPEN 2023-07-22 07:10:36,172 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-22 07:10:36,172 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 07:10:36,174 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=72 2023-07-22 07:10:36,174 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=72, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41787,1690009825478 in 240 msec 2023-07-22 07:10:36,175 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 817 msec 2023-07-22 07:10:36,359 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure.ProcedureSyncWait(216): waitFor pid=72 2023-07-22 07:10:36,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33357,1690009829125, jenkins-hbase4.apache.org,34133,1690009825283, jenkins-hbase4.apache.org,39057,1690009825637] are moved back to default 2023-07-22 07:10:36,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-22 07:10:36,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:36,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:36,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:36,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-22 07:10:36,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:36,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:36,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-22 07:10:36,371 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:10:36,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 75 2023-07-22 07:10:36,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-22 07:10:36,374 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:36,375 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-22 07:10:36,375 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:36,375 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo 
count: 6 2023-07-22 07:10:36,379 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:10:36,380 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39057] ipc.CallRunner(144): callId: 176 service: ClientService methodName: Get size: 142 connection: 172.31.14.131:44332 deadline: 1690009896380, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=41787 startCode=1690009825478. As of locationSeqNum=90. 2023-07-22 07:10:36,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-22 07:10:36,488 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:36,489 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8 empty. 2023-07-22 07:10:36,490 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:36,490 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-22 07:10:36,516 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:36,518 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => cc9a49a5714433c50e2c0e1046710cb8, NAME => 'Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:36,538 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:36,538 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing cc9a49a5714433c50e2c0e1046710cb8, disabling compactions & flushes 2023-07-22 07:10:36,538 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 
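At this point the master is executing CreateTableProcedure pid=75 for Group_testFailRemoveGroup: one column family "f", region replication 1, everything else default, exactly as printed by HMaster above. A hedged sketch of the equivalent client call with the HBase 2.x Admin API, assuming an open Connection `conn` as in the earlier sketch:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

final class CreateTestTable {
  /** Create the table the way the log's CreateTableProcedure describes it. */
  static void createTestTable(Connection conn) throws Exception {
    TableName tableName = TableName.valueOf("Group_testFailRemoveGroup");
    try (Admin admin = conn.getAdmin()) {
      // Single family "f", REGION_REPLICATION => '1', defaults for the rest,
      // matching the descriptor echoed by HMaster$4 in the log.
      admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build());
    }
  }
}
```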
2023-07-22 07:10:36,538 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:36,538 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. after waiting 0 ms 2023-07-22 07:10:36,538 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:36,538 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:36,538 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for cc9a49a5714433c50e2c0e1046710cb8: 2023-07-22 07:10:36,541 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:10:36,542 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690009836542"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009836542"}]},"ts":"1690009836542"} 2023-07-22 07:10:36,544 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 07:10:36,545 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:10:36,545 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009836545"}]},"ts":"1690009836545"} 2023-07-22 07:10:36,549 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-22 07:10:36,556 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cc9a49a5714433c50e2c0e1046710cb8, ASSIGN}] 2023-07-22 07:10:36,558 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=76, ppid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cc9a49a5714433c50e2c0e1046710cb8, ASSIGN 2023-07-22 07:10:36,559 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=76, ppid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cc9a49a5714433c50e2c0e1046710cb8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41787,1690009825478; forceNewPlan=false, retain=false 2023-07-22 07:10:36,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-22 07:10:36,711 INFO [PEWorker-5] 
assignment.RegionStateStore(219): pid=76 updating hbase:meta row=cc9a49a5714433c50e2c0e1046710cb8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:36,711 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690009836710"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009836710"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009836710"}]},"ts":"1690009836710"} 2023-07-22 07:10:36,715 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=76, state=RUNNABLE; OpenRegionProcedure cc9a49a5714433c50e2c0e1046710cb8, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:36,873 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:36,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cc9a49a5714433c50e2c0e1046710cb8, NAME => 'Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:36,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:36,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:36,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:36,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:36,877 INFO [StoreOpener-cc9a49a5714433c50e2c0e1046710cb8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:36,879 DEBUG [StoreOpener-cc9a49a5714433c50e2c0e1046710cb8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8/f 2023-07-22 07:10:36,879 DEBUG [StoreOpener-cc9a49a5714433c50e2c0e1046710cb8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8/f 2023-07-22 07:10:36,880 INFO [StoreOpener-cc9a49a5714433c50e2c0e1046710cb8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to 
compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cc9a49a5714433c50e2c0e1046710cb8 columnFamilyName f 2023-07-22 07:10:36,880 INFO [StoreOpener-cc9a49a5714433c50e2c0e1046710cb8-1] regionserver.HStore(310): Store=cc9a49a5714433c50e2c0e1046710cb8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:36,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:36,882 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:36,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:36,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:36,896 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cc9a49a5714433c50e2c0e1046710cb8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11161438400, jitterRate=0.039489954710006714}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:36,896 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cc9a49a5714433c50e2c0e1046710cb8: 2023-07-22 07:10:36,897 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8., pid=77, masterSystemTime=1690009836867 2023-07-22 07:10:36,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:36,899 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 
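The records that follow show the create-table future completing (procId 75) and the test utility waiting until every region of Group_testFailRemoveGroup is assigned, with a 60000 ms timeout. Outside a test harness, a client can do a similar, coarser wait with the public Admin and RegionLocator APIs; a minimal sketch, with the timeout borrowed from the log:

```java
import java.util.List;

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

final class WaitForAssignment {
  /** Poll until the table is available and every region reports a location, or time out. */
  static void waitUntilAssigned(Connection conn, TableName tableName, long timeoutMs)
      throws Exception {
    long deadline = System.currentTimeMillis() + timeoutMs;   // e.g. 60000 ms as in the log
    try (Admin admin = conn.getAdmin();
         RegionLocator locator = conn.getRegionLocator(tableName)) {
      while (System.currentTimeMillis() < deadline) {
        if (admin.isTableAvailable(tableName)) {
          List<HRegionLocation> locations = locator.getAllRegionLocations();
          boolean allPlaced = !locations.isEmpty()
              && locations.stream().allMatch(l -> l.getServerName() != null);
          if (allPlaced) {
            return;
          }
        }
        Thread.sleep(200);  // modest pause between checks
      }
      throw new IllegalStateException("Regions of " + tableName + " not assigned in time");
    }
  }
}
```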
2023-07-22 07:10:36,899 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=76 updating hbase:meta row=cc9a49a5714433c50e2c0e1046710cb8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:36,900 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690009836899"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009836899"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009836899"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009836899"}]},"ts":"1690009836899"} 2023-07-22 07:10:36,904 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=76 2023-07-22 07:10:36,904 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=76, state=SUCCESS; OpenRegionProcedure cc9a49a5714433c50e2c0e1046710cb8, server=jenkins-hbase4.apache.org,41787,1690009825478 in 189 msec 2023-07-22 07:10:36,906 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-22 07:10:36,906 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cc9a49a5714433c50e2c0e1046710cb8, ASSIGN in 348 msec 2023-07-22 07:10:36,906 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:10:36,907 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009836907"}]},"ts":"1690009836907"} 2023-07-22 07:10:36,908 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-22 07:10:36,911 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:10:36,912 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 543 msec 2023-07-22 07:10:36,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-22 07:10:36,977 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 75 completed 2023-07-22 07:10:36,977 DEBUG [Listener at localhost/46507] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-22 07:10:36,977 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:36,980 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39057] ipc.CallRunner(144): callId: 279 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:44358 deadline: 1690009896980, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=41787 startCode=1690009825478. As of locationSeqNum=90. 2023-07-22 07:10:37,088 DEBUG [hconnection-0x422d8bf2-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 07:10:37,093 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59336, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 07:10:37,111 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-22 07:10:37,112 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:37,112 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-22 07:10:37,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-22 07:10:37,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:37,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-22 07:10:37,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:37,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:37,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-22 07:10:37,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(345): Moving region cc9a49a5714433c50e2c0e1046710cb8 to RSGroup bar 2023-07-22 07:10:37,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:37,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:37,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:37,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:37,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-22 07:10:37,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:37,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cc9a49a5714433c50e2c0e1046710cb8, REOPEN/MOVE 2023-07-22 07:10:37,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-22 07:10:37,123 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cc9a49a5714433c50e2c0e1046710cb8, REOPEN/MOVE 2023-07-22 07:10:37,123 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=cc9a49a5714433c50e2c0e1046710cb8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:37,124 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690009837123"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009837123"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009837123"}]},"ts":"1690009837123"} 2023-07-22 07:10:37,125 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; CloseRegionProcedure cc9a49a5714433c50e2c0e1046710cb8, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:37,278 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:37,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cc9a49a5714433c50e2c0e1046710cb8, disabling compactions & flushes 2023-07-22 07:10:37,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:37,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:37,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. after waiting 0 ms 2023-07-22 07:10:37,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:37,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:37,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 
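The MoveTables request logged just above triggers REOPEN/MOVE pid=78: the table's only region is closed on jenkins-hbase4.apache.org,41787 (the one server still in the default group) so it can reopen on a server in "bar"; the close has just completed and the next records show the reopen on 39057. A hedged sketch of the client call, again assuming the branch-2 RSGroupAdminClient API:

```java
import java.util.Collections;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class MoveTableToGroup {
  /** Assign the table to RSGroup "bar"; the master then reopens its regions on servers in that group. */
  static void moveToBar(Connection conn) throws Exception {
    TableName tableName = TableName.valueOf("Group_testFailRemoveGroup");
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

    // Corresponds to RSGroupAdminService.MoveTables in the log.
    rsGroupAdmin.moveTables(Collections.singleton(tableName), "bar");

    // The group descriptor should now list the table.
    RSGroupInfo bar = rsGroupAdmin.getRSGroupInfo("bar");
    System.out.println("Tables in bar: " + bar.getTables());
  }
}
```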
2023-07-22 07:10:37,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cc9a49a5714433c50e2c0e1046710cb8: 2023-07-22 07:10:37,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding cc9a49a5714433c50e2c0e1046710cb8 move to jenkins-hbase4.apache.org,39057,1690009825637 record at close sequenceid=2 2023-07-22 07:10:37,286 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:37,287 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=cc9a49a5714433c50e2c0e1046710cb8, regionState=CLOSED 2023-07-22 07:10:37,287 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690009837287"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009837287"}]},"ts":"1690009837287"} 2023-07-22 07:10:37,290 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-22 07:10:37,290 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; CloseRegionProcedure cc9a49a5714433c50e2c0e1046710cb8, server=jenkins-hbase4.apache.org,41787,1690009825478 in 164 msec 2023-07-22 07:10:37,291 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cc9a49a5714433c50e2c0e1046710cb8, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39057,1690009825637; forceNewPlan=false, retain=false 2023-07-22 07:10:37,441 INFO [jenkins-hbase4:37061] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-22 07:10:37,442 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=cc9a49a5714433c50e2c0e1046710cb8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:37,442 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690009837442"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009837442"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009837442"}]},"ts":"1690009837442"} 2023-07-22 07:10:37,445 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; OpenRegionProcedure cc9a49a5714433c50e2c0e1046710cb8, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:37,605 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 
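The records around this point reopen region cc9a49a5714433c50e2c0e1046710cb8 on jenkins-hbase4.apache.org,39057, a member of "bar". A small verification sketch a client could run afterwards; it assumes the branch-2 RSGroupInfo exposes containsServer(Address), and it passes reload=true so the locator does a fresh meta lookup rather than trusting a cached (possibly pre-move) location:

```java
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class VerifyGroupPlacement {
  /** Check that the table's single region is hosted by a server that belongs to group "bar". */
  static boolean regionHostedInBar(Connection conn) throws Exception {
    TableName tableName = TableName.valueOf("Group_testFailRemoveGroup");
    RSGroupInfo bar = new RSGroupAdminClient(conn).getRSGroupInfo("bar");
    try (RegionLocator locator = conn.getRegionLocator(tableName)) {
      // reload=true forces a fresh hbase:meta lookup instead of the cached location.
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
      ServerName host = loc.getServerName();
      return bar.containsServer(Address.fromParts(host.getHostname(), host.getPort()));
    }
  }
}
```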
2023-07-22 07:10:37,606 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cc9a49a5714433c50e2c0e1046710cb8, NAME => 'Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:37,606 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:37,606 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:37,606 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:37,606 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:37,608 INFO [StoreOpener-cc9a49a5714433c50e2c0e1046710cb8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:37,609 DEBUG [StoreOpener-cc9a49a5714433c50e2c0e1046710cb8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8/f 2023-07-22 07:10:37,609 DEBUG [StoreOpener-cc9a49a5714433c50e2c0e1046710cb8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8/f 2023-07-22 07:10:37,610 INFO [StoreOpener-cc9a49a5714433c50e2c0e1046710cb8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cc9a49a5714433c50e2c0e1046710cb8 columnFamilyName f 2023-07-22 07:10:37,610 INFO [StoreOpener-cc9a49a5714433c50e2c0e1046710cb8-1] regionserver.HStore(310): Store=cc9a49a5714433c50e2c0e1046710cb8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:37,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:37,613 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:37,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:37,617 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cc9a49a5714433c50e2c0e1046710cb8; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9779257600, jitterRate=-0.08923566341400146}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:37,617 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cc9a49a5714433c50e2c0e1046710cb8: 2023-07-22 07:10:37,618 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8., pid=80, masterSystemTime=1690009837597 2023-07-22 07:10:37,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:37,620 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:37,621 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=cc9a49a5714433c50e2c0e1046710cb8, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:37,621 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690009837621"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009837621"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009837621"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009837621"}]},"ts":"1690009837621"} 2023-07-22 07:10:37,626 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-22 07:10:37,626 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; OpenRegionProcedure cc9a49a5714433c50e2c0e1046710cb8, server=jenkins-hbase4.apache.org,39057,1690009825637 in 180 msec 2023-07-22 07:10:37,627 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cc9a49a5714433c50e2c0e1046710cb8, REOPEN/MOVE in 506 msec 2023-07-22 07:10:38,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-22 07:10:38,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
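All regions of the table now sit in "bar", which sets up the two rejections the records that follow show: removeRSGroup("bar") fails because the group still contains a table, and moving bar's servers back to default fails because the group's tables would be left with no servers to host them; both surface as ConstraintException from the master. A hedged sketch of how a client would hit and handle the first rejection (RSGroupAdminClient throws IOException, so the sketch catches that rather than assuming the original exception class is re-instantiated on the client):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class RemoveGroupWhileStillUsed {
  /** Attempt to drop group "bar" while it still owns a table; the master is expected to refuse. */
  static void tryRemoveBar(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    try {
      rsGroupAdmin.removeRSGroup("bar");   // corresponds to RSGroupAdminService.RemoveRSGroup
    } catch (IOException e) {
      // Expected: the master's ConstraintException, e.g. "RSGroup bar has 1 tables;
      // you must remove these tables from the rsgroup before the rsgroup can be removed."
      System.out.println("Rejected as expected: " + e.getMessage());
    }
    // To actually remove the group, first move its tables back to default and only
    // then its servers, which is exactly what the test does in the records below.
  }
}
```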
2023-07-22 07:10:38,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:38,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:38,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:38,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-22 07:10:38,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:38,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-22 07:10:38,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:38,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 289 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:38908 deadline: 1690011038133, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-22 07:10:38,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:33357] to rsgroup default 2023-07-22 07:10:38,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:38,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 291 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:38908 deadline: 1690011038135, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-22 07:10:38,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-22 07:10:38,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:38,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-22 07:10:38,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:38,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:38,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-22 07:10:38,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(345): Moving region cc9a49a5714433c50e2c0e1046710cb8 to RSGroup default 2023-07-22 07:10:38,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cc9a49a5714433c50e2c0e1046710cb8, REOPEN/MOVE 2023-07-22 07:10:38,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-22 07:10:38,149 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cc9a49a5714433c50e2c0e1046710cb8, REOPEN/MOVE 2023-07-22 07:10:38,150 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=cc9a49a5714433c50e2c0e1046710cb8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:38,150 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690009838150"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009838150"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009838150"}]},"ts":"1690009838150"} 2023-07-22 07:10:38,152 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure cc9a49a5714433c50e2c0e1046710cb8, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:38,309 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:38,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cc9a49a5714433c50e2c0e1046710cb8, disabling compactions & flushes 2023-07-22 07:10:38,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:38,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:38,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. after waiting 0 ms 2023-07-22 07:10:38,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:38,316 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 07:10:38,316 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 
2023-07-22 07:10:38,316 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cc9a49a5714433c50e2c0e1046710cb8: 2023-07-22 07:10:38,316 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding cc9a49a5714433c50e2c0e1046710cb8 move to jenkins-hbase4.apache.org,41787,1690009825478 record at close sequenceid=5 2023-07-22 07:10:38,320 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:38,321 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=cc9a49a5714433c50e2c0e1046710cb8, regionState=CLOSED 2023-07-22 07:10:38,321 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690009838321"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009838321"}]},"ts":"1690009838321"} 2023-07-22 07:10:38,326 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-22 07:10:38,326 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure cc9a49a5714433c50e2c0e1046710cb8, server=jenkins-hbase4.apache.org,39057,1690009825637 in 171 msec 2023-07-22 07:10:38,327 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cc9a49a5714433c50e2c0e1046710cb8, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41787,1690009825478; forceNewPlan=false, retain=false 2023-07-22 07:10:38,477 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=cc9a49a5714433c50e2c0e1046710cb8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:38,477 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690009838477"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009838477"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009838477"}]},"ts":"1690009838477"} 2023-07-22 07:10:38,480 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure cc9a49a5714433c50e2c0e1046710cb8, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:38,532 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-22 07:10:38,636 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 
2023-07-22 07:10:38,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cc9a49a5714433c50e2c0e1046710cb8, NAME => 'Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:38,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:38,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:38,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:38,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:38,642 INFO [StoreOpener-cc9a49a5714433c50e2c0e1046710cb8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:38,643 DEBUG [StoreOpener-cc9a49a5714433c50e2c0e1046710cb8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8/f 2023-07-22 07:10:38,643 DEBUG [StoreOpener-cc9a49a5714433c50e2c0e1046710cb8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8/f 2023-07-22 07:10:38,644 INFO [StoreOpener-cc9a49a5714433c50e2c0e1046710cb8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cc9a49a5714433c50e2c0e1046710cb8 columnFamilyName f 2023-07-22 07:10:38,647 INFO [StoreOpener-cc9a49a5714433c50e2c0e1046710cb8-1] regionserver.HStore(310): Store=cc9a49a5714433c50e2c0e1046710cb8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:38,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:38,649 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:38,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:38,654 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cc9a49a5714433c50e2c0e1046710cb8; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10965789600, jitterRate=0.02126874029636383}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:38,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cc9a49a5714433c50e2c0e1046710cb8: 2023-07-22 07:10:38,656 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8., pid=83, masterSystemTime=1690009838632 2023-07-22 07:10:38,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:38,658 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:38,659 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=cc9a49a5714433c50e2c0e1046710cb8, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:38,659 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690009838659"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009838659"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009838659"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009838659"}]},"ts":"1690009838659"} 2023-07-22 07:10:38,667 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-22 07:10:38,667 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure cc9a49a5714433c50e2c0e1046710cb8, server=jenkins-hbase4.apache.org,41787,1690009825478 in 181 msec 2023-07-22 07:10:38,670 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cc9a49a5714433c50e2c0e1046710cb8, REOPEN/MOVE in 521 msec 2023-07-22 07:10:39,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-22 07:10:39,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
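The two rejections above show the constraints being exercised: removeRSGroup on a group that still owns a table fails, and moveServers cannot empty a group whose tables would be left without hosts; only the subsequent moveTables back to default succeeds. A hedged sketch of the kind of assertion this implies, assuming the same RSGroupAdmin handle (the helper name is hypothetical; catching ConstraintException on the client matches the unwrapped exception shown later in this log):

    import java.io.IOException;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

    // Sketch only: removing a non-empty rsgroup must be rejected with a
    // ConstraintException rather than silently dropping the group.
    final class RemoveNonEmptyGroupSketch {
      static void assertRemoveRejected(RSGroupAdmin rsGroupAdmin, String group) throws IOException {
        try {
          rsGroupAdmin.removeRSGroup(group);
          throw new AssertionError("removeRSGroup(" + group + ") should have been rejected");
        } catch (ConstraintException expected) {
          // e.g. "RSGroup bar has 1 tables; you must remove these tables from the
          // rsgroup before the rsgroup can be removed."
        }
      }
    }
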
2023-07-22 07:10:39,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:39,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:39,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:39,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-22 07:10:39,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:39,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 298 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:38908 deadline: 1690011039157, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
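Together with the earlier failures, this second rejection pins down the required teardown order: tables must leave the group before its servers can, and only an empty group can be removed, which is exactly the sequence that succeeds in the records that follow. A sketch under those assumptions (hypothetical helper; "default" as the target group, as in the log):

    import java.io.IOException;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

    // Sketch only: drain a group in the order the ConstraintExceptions dictate.
    final class DrainGroupSketch {
      static void drainAndRemove(RSGroupAdmin rsGroupAdmin, String group,
          Set<TableName> tables, Set<Address> servers) throws IOException {
        // Tables first; otherwise moveServers is rejected because the group's
        // tables would be left without servers to host them.
        rsGroupAdmin.moveTables(tables, "default");
        // Servers next; otherwise removeRSGroup is rejected ("has N servers").
        rsGroupAdmin.moveServers(servers, "default");
        // The group is now empty, so removal goes through.
        rsGroupAdmin.removeRSGroup(group);
      }
    }
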
2023-07-22 07:10:39,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:33357] to rsgroup default 2023-07-22 07:10:39,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:39,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-22 07:10:39,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:39,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:39,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-22 07:10:39,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33357,1690009829125, jenkins-hbase4.apache.org,34133,1690009825283, jenkins-hbase4.apache.org,39057,1690009825637] are moved back to bar 2023-07-22 07:10:39,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-22 07:10:39,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:39,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:39,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:39,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-22 07:10:39,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:39,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:39,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 07:10:39,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:39,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:39,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master 
service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:39,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:39,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:39,186 INFO [Listener at localhost/46507] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-22 07:10:39,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-22 07:10:39,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-22 07:10:39,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-22 07:10:39,193 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009839193"}]},"ts":"1690009839193"} 2023-07-22 07:10:39,194 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-22 07:10:39,198 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-22 07:10:39,201 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cc9a49a5714433c50e2c0e1046710cb8, UNASSIGN}] 2023-07-22 07:10:39,203 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cc9a49a5714433c50e2c0e1046710cb8, UNASSIGN 2023-07-22 07:10:39,204 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=cc9a49a5714433c50e2c0e1046710cb8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:39,204 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690009839203"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009839203"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009839203"}]},"ts":"1690009839203"} 2023-07-22 07:10:39,205 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; CloseRegionProcedure cc9a49a5714433c50e2c0e1046710cb8, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:39,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-22 07:10:39,358 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:39,359 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cc9a49a5714433c50e2c0e1046710cb8, disabling compactions & flushes 2023-07-22 07:10:39,359 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:39,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:39,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. after waiting 0 ms 2023-07-22 07:10:39,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:39,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-22 07:10:39,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8. 2023-07-22 07:10:39,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cc9a49a5714433c50e2c0e1046710cb8: 2023-07-22 07:10:39,367 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:39,367 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=cc9a49a5714433c50e2c0e1046710cb8, regionState=CLOSED 2023-07-22 07:10:39,368 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690009839367"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009839367"}]},"ts":"1690009839367"} 2023-07-22 07:10:39,380 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-22 07:10:39,380 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; CloseRegionProcedure cc9a49a5714433c50e2c0e1046710cb8, server=jenkins-hbase4.apache.org,41787,1690009825478 in 171 msec 2023-07-22 07:10:39,382 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-22 07:10:39,382 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cc9a49a5714433c50e2c0e1046710cb8, UNASSIGN in 179 msec 2023-07-22 07:10:39,383 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009839383"}]},"ts":"1690009839383"} 2023-07-22 07:10:39,385 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-22 07:10:39,392 INFO [PEWorker-4] 
procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-22 07:10:39,395 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 207 msec 2023-07-22 07:10:39,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-22 07:10:39,495 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-22 07:10:39,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-22 07:10:39,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-22 07:10:39,505 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=87, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-22 07:10:39,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-22 07:10:39,508 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=87, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-22 07:10:39,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:39,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:39,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:39,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-22 07:10:39,514 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:39,519 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8/recovered.edits] 2023-07-22 07:10:39,531 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8/recovered.edits/10.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8/recovered.edits/10.seqid 2023-07-22 07:10:39,531 DEBUG [HFileArchiver-2] 
backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testFailRemoveGroup/cc9a49a5714433c50e2c0e1046710cb8 2023-07-22 07:10:39,532 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-22 07:10:39,535 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=87, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-22 07:10:39,550 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-22 07:10:39,555 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-22 07:10:39,556 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=87, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-22 07:10:39,557 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-22 07:10:39,557 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009839557"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:39,560 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-22 07:10:39,560 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => cc9a49a5714433c50e2c0e1046710cb8, NAME => 'Group_testFailRemoveGroup,,1690009836368.cc9a49a5714433c50e2c0e1046710cb8.', STARTKEY => '', ENDKEY => ''}] 2023-07-22 07:10:39,560 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
2023-07-22 07:10:39,560 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690009839560"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:39,564 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-22 07:10:39,566 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=87, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-22 07:10:39,569 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 70 msec 2023-07-22 07:10:39,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-22 07:10:39,615 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 87 completed 2023-07-22 07:10:39,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:39,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:39,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:39,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
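The disable/delete pair above (DisableTableProcedure pid=84, then DeleteTableProcedure pid=87, which archives the region directory and has the rsgroup endpoint drop the table from group 'default') corresponds to two ordinary Admin calls on the client side. A minimal sketch using the standard Admin API; the wrapper name is hypothetical:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // Sketch only: disableTable drives a DisableTableProcedure and deleteTable a
    // DeleteTableProcedure like the ones logged above; both calls block until the
    // corresponding procedure completes.
    final class DropTableSketch {
      static void dropTable(Admin admin, TableName table) throws IOException {
        if (admin.tableExists(table)) {
          if (!admin.isTableDisabled(table)) {
            admin.disableTable(table); // table state becomes DISABLED in hbase:meta
          }
          admin.deleteTable(table);    // regions archived and removed from hbase:meta
        }
      }
    }
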
2023-07-22 07:10:39,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:39,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:39,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:39,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:39,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:39,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:39,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:39,634 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:39,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:39,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:39,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:39,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:39,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:39,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:39,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:39,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:39,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:39,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 346 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011039645, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:39,646 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:10:39,647 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:39,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:39,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:39,648 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:39,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:39,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:39,690 INFO [Listener at localhost/46507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=512 (was 497) Potentially hanging thread: hconnection-0x422d8bf2-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50c4626a-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_12167090_17 at /127.0.0.1:60372 [Receiving block BP-1233006246-172.31.14.131-1690009819581:blk_1073741856_1032] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1563634625_17 at /127.0.0.1:60418 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: PacketResponder: BP-1233006246-172.31.14.131-1690009819581:blk_1073741856_1032, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_12167090_17 at /127.0.0.1:34374 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666-prefix:jenkins-hbase4.apache.org,41787,1690009825478.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/cluster_00bb9b6e-43e0-f121-1ccb-023e0a721165/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50c4626a-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50c4626a-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50c4626a-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50c4626a-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/cluster_00bb9b6e-43e0-f121-1ccb-023e0a721165/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_12167090_17 at /127.0.0.1:52296 [Receiving block BP-1233006246-172.31.14.131-1690009819581:blk_1073741856_1032] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1076689109_17 at /127.0.0.1:34334 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_12167090_17 at /127.0.0.1:34388 [Receiving block BP-1233006246-172.31.14.131-1690009819581:blk_1073741856_1032] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1233006246-172.31.14.131-1690009819581:blk_1073741856_1032, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50c4626a-shared-pool-16 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_12167090_17 at /127.0.0.1:52286 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1233006246-172.31.14.131-1690009819581:blk_1073741856_1032, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1076689109_17 at /127.0.0.1:60362 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1076689109_17 at /127.0.0.1:60292 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=792 (was 772) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=404 (was 404), ProcessCount=180 (was 180), AvailableMemoryMB=6828 (was 7105) 2023-07-22 07:10:39,690 WARN [Listener at localhost/46507] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-22 07:10:39,745 INFO [Listener at localhost/46507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=512, OpenFileDescriptor=794, MaxFileDescriptor=60000, SystemLoadAverage=404, ProcessCount=180, AvailableMemoryMB=6825 2023-07-22 07:10:39,745 WARN [Listener at localhost/46507] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-22 07:10:39,745 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-22 07:10:39,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:39,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:39,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:39,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-22 07:10:39,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:39,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:39,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:39,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:39,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:39,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:39,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:39,775 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:39,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:39,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:39,779 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:39,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:39,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:39,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:39,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:39,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:39,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:39,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 374 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011039800, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:39,801 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 07:10:39,812 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:39,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:39,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:39,814 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:39,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:39,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:39,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:39,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:39,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_1841899830 2023-07-22 07:10:39,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:39,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1841899830 2023-07-22 07:10:39,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:39,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:39,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:39,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:39,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:39,831 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33357] to rsgroup Group_testMultiTableMove_1841899830 2023-07-22 07:10:39,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:39,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1841899830 2023-07-22 07:10:39,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:39,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:39,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-22 07:10:39,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33357,1690009829125] are moved back to default 2023-07-22 07:10:39,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1841899830 2023-07-22 07:10:39,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:39,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:39,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:39,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1841899830 2023-07-22 07:10:39,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:39,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:39,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 07:10:39,851 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:10:39,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 88 2023-07-22 07:10:39,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-22 07:10:39,854 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:39,854 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1841899830 2023-07-22 07:10:39,855 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:39,855 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:39,860 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:10:39,863 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:39,863 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b empty. 2023-07-22 07:10:39,864 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:39,864 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-22 07:10:39,909 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:39,919 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 591d20eb46a0fc5360008eff87b9f32b, NAME => 'GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:39,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-22 07:10:39,967 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; 
preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:39,967 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 591d20eb46a0fc5360008eff87b9f32b, disabling compactions & flushes 2023-07-22 07:10:39,967 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 2023-07-22 07:10:39,967 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 2023-07-22 07:10:39,967 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. after waiting 0 ms 2023-07-22 07:10:39,967 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 2023-07-22 07:10:39,967 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 2023-07-22 07:10:39,967 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 591d20eb46a0fc5360008eff87b9f32b: 2023-07-22 07:10:39,971 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:10:39,972 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009839972"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009839972"}]},"ts":"1690009839972"} 2023-07-22 07:10:39,974 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-22 07:10:39,979 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:10:39,980 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009839979"}]},"ts":"1690009839979"} 2023-07-22 07:10:39,982 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-22 07:10:39,987 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:39,987 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:39,987 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:39,987 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:39,987 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:39,987 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=591d20eb46a0fc5360008eff87b9f32b, ASSIGN}] 2023-07-22 07:10:39,991 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=591d20eb46a0fc5360008eff87b9f32b, ASSIGN 2023-07-22 07:10:39,994 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=591d20eb46a0fc5360008eff87b9f32b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34133,1690009825283; forceNewPlan=false, retain=false 2023-07-22 07:10:40,144 INFO [jenkins-hbase4:37061] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-22 07:10:40,146 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=591d20eb46a0fc5360008eff87b9f32b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:40,146 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009840146"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009840146"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009840146"}]},"ts":"1690009840146"} 2023-07-22 07:10:40,148 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=90, ppid=89, state=RUNNABLE; OpenRegionProcedure 591d20eb46a0fc5360008eff87b9f32b, server=jenkins-hbase4.apache.org,34133,1690009825283}] 2023-07-22 07:10:40,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-22 07:10:40,305 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 2023-07-22 07:10:40,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 591d20eb46a0fc5360008eff87b9f32b, NAME => 'GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:40,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:40,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:40,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:40,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:40,307 INFO [StoreOpener-591d20eb46a0fc5360008eff87b9f32b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:40,310 DEBUG [StoreOpener-591d20eb46a0fc5360008eff87b9f32b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b/f 2023-07-22 07:10:40,310 DEBUG [StoreOpener-591d20eb46a0fc5360008eff87b9f32b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b/f 2023-07-22 07:10:40,310 INFO [StoreOpener-591d20eb46a0fc5360008eff87b9f32b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 591d20eb46a0fc5360008eff87b9f32b columnFamilyName f 2023-07-22 07:10:40,311 INFO [StoreOpener-591d20eb46a0fc5360008eff87b9f32b-1] regionserver.HStore(310): Store=591d20eb46a0fc5360008eff87b9f32b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:40,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:40,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:40,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:40,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:40,322 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 591d20eb46a0fc5360008eff87b9f32b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10072532160, jitterRate=-0.0619223415851593}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:40,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 591d20eb46a0fc5360008eff87b9f32b: 2023-07-22 07:10:40,323 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b., pid=90, masterSystemTime=1690009840300 2023-07-22 07:10:40,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 2023-07-22 07:10:40,325 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 
2023-07-22 07:10:40,325 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=591d20eb46a0fc5360008eff87b9f32b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:40,325 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009840325"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009840325"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009840325"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009840325"}]},"ts":"1690009840325"} 2023-07-22 07:10:40,330 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=90, resume processing ppid=89 2023-07-22 07:10:40,330 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=90, ppid=89, state=SUCCESS; OpenRegionProcedure 591d20eb46a0fc5360008eff87b9f32b, server=jenkins-hbase4.apache.org,34133,1690009825283 in 179 msec 2023-07-22 07:10:40,331 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-22 07:10:40,331 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=591d20eb46a0fc5360008eff87b9f32b, ASSIGN in 343 msec 2023-07-22 07:10:40,332 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:10:40,332 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009840332"}]},"ts":"1690009840332"} 2023-07-22 07:10:40,334 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-22 07:10:40,336 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:10:40,338 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=88, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 489 msec 2023-07-22 07:10:40,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-22 07:10:40,460 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 88 completed 2023-07-22 07:10:40,460 DEBUG [Listener at localhost/46507] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-22 07:10:40,460 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:40,465 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-22 07:10:40,465 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:40,465 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-22 07:10:40,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:40,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 07:10:40,471 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:10:40,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 91 2023-07-22 07:10:40,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-22 07:10:40,473 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:40,474 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1841899830 2023-07-22 07:10:40,474 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:40,474 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:40,477 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:10:40,479 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:40,480 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d empty. 
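(The entries above trace the CreateTableProcedure flow for GrouptestMultiTableMoveA through region open and the test's wait for assignment; the same flow then repeats for GrouptestMultiTableMoveB. A minimal, hedged sketch of the client side that drives this flow is below; the helper method and variable names are illustrative assumptions, not code taken from TestRSGroupsAdmin1.)

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    // Creates a table with the single column family 'f' seen in the log and
    // blocks until hbase:meta reports all of its regions as assigned.
    static void createAndWait(Admin admin, HBaseTestingUtility util, String name)
        throws IOException {
      TableName table = TableName.valueOf(name);
      admin.createTable(TableDescriptorBuilder.newBuilder(table)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build());
      // Corresponds to "Waiting until all regions of table ... get assigned".
      util.waitUntilAllRegionsAssigned(table);
    }

(In a test like this one, such a helper would be invoked once per table, GrouptestMultiTableMoveA and then GrouptestMultiTableMoveB, before the rsgroup move.)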
2023-07-22 07:10:40,481 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:40,481 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-22 07:10:40,507 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:40,508 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3fa56e4021041d54f0f41ae9d297fd8d, NAME => 'GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:40,532 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:40,532 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 3fa56e4021041d54f0f41ae9d297fd8d, disabling compactions & flushes 2023-07-22 07:10:40,532 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 2023-07-22 07:10:40,532 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 2023-07-22 07:10:40,532 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. after waiting 0 ms 2023-07-22 07:10:40,532 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 2023-07-22 07:10:40,533 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 
2023-07-22 07:10:40,533 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 3fa56e4021041d54f0f41ae9d297fd8d: 2023-07-22 07:10:40,535 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:10:40,536 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009840536"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009840536"}]},"ts":"1690009840536"} 2023-07-22 07:10:40,538 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 07:10:40,539 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:10:40,539 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009840539"}]},"ts":"1690009840539"} 2023-07-22 07:10:40,540 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-22 07:10:40,543 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:40,543 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:40,543 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:40,543 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:40,543 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:40,544 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3fa56e4021041d54f0f41ae9d297fd8d, ASSIGN}] 2023-07-22 07:10:40,545 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3fa56e4021041d54f0f41ae9d297fd8d, ASSIGN 2023-07-22 07:10:40,546 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3fa56e4021041d54f0f41ae9d297fd8d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34133,1690009825283; forceNewPlan=false, retain=false 2023-07-22 07:10:40,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-22 07:10:40,696 INFO [jenkins-hbase4:37061] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-22 07:10:40,698 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=3fa56e4021041d54f0f41ae9d297fd8d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:40,698 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009840698"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009840698"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009840698"}]},"ts":"1690009840698"} 2023-07-22 07:10:40,700 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; OpenRegionProcedure 3fa56e4021041d54f0f41ae9d297fd8d, server=jenkins-hbase4.apache.org,34133,1690009825283}] 2023-07-22 07:10:40,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-22 07:10:40,857 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 2023-07-22 07:10:40,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3fa56e4021041d54f0f41ae9d297fd8d, NAME => 'GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:40,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:40,858 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:40,858 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:40,858 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:40,859 INFO [StoreOpener-3fa56e4021041d54f0f41ae9d297fd8d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:40,861 DEBUG [StoreOpener-3fa56e4021041d54f0f41ae9d297fd8d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d/f 2023-07-22 07:10:40,861 DEBUG [StoreOpener-3fa56e4021041d54f0f41ae9d297fd8d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d/f 2023-07-22 07:10:40,861 INFO [StoreOpener-3fa56e4021041d54f0f41ae9d297fd8d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3fa56e4021041d54f0f41ae9d297fd8d columnFamilyName f 2023-07-22 07:10:40,862 INFO [StoreOpener-3fa56e4021041d54f0f41ae9d297fd8d-1] regionserver.HStore(310): Store=3fa56e4021041d54f0f41ae9d297fd8d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:40,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:40,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:40,866 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:40,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:40,871 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3fa56e4021041d54f0f41ae9d297fd8d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11145486560, jitterRate=0.038004323840141296}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:40,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3fa56e4021041d54f0f41ae9d297fd8d: 2023-07-22 07:10:40,872 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d., pid=93, masterSystemTime=1690009840853 2023-07-22 07:10:40,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 2023-07-22 07:10:40,874 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 
2023-07-22 07:10:40,874 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=3fa56e4021041d54f0f41ae9d297fd8d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:40,875 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009840874"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009840874"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009840874"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009840874"}]},"ts":"1690009840874"} 2023-07-22 07:10:40,879 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-22 07:10:40,879 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; OpenRegionProcedure 3fa56e4021041d54f0f41ae9d297fd8d, server=jenkins-hbase4.apache.org,34133,1690009825283 in 176 msec 2023-07-22 07:10:40,881 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-22 07:10:40,881 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3fa56e4021041d54f0f41ae9d297fd8d, ASSIGN in 336 msec 2023-07-22 07:10:40,882 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:10:40,882 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009840882"}]},"ts":"1690009840882"} 2023-07-22 07:10:40,884 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-22 07:10:40,886 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:10:40,888 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 418 msec 2023-07-22 07:10:41,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-22 07:10:41,076 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 91 completed 2023-07-22 07:10:41,076 DEBUG [Listener at localhost/46507] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-22 07:10:41,077 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:41,080 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-22 07:10:41,080 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:41,081 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-22 07:10:41,081 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:41,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-22 07:10:41,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:10:41,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-22 07:10:41,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:10:41,096 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1841899830 2023-07-22 07:10:41,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1841899830 2023-07-22 07:10:41,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:41,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1841899830 2023-07-22 07:10:41,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:41,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:41,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1841899830 2023-07-22 07:10:41,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(345): Moving region 3fa56e4021041d54f0f41ae9d297fd8d to RSGroup Group_testMultiTableMove_1841899830 2023-07-22 07:10:41,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3fa56e4021041d54f0f41ae9d297fd8d, REOPEN/MOVE 2023-07-22 07:10:41,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1841899830 2023-07-22 07:10:41,112 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(345): Moving region 591d20eb46a0fc5360008eff87b9f32b to RSGroup Group_testMultiTableMove_1841899830 2023-07-22 07:10:41,113 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=94, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3fa56e4021041d54f0f41ae9d297fd8d, REOPEN/MOVE 2023-07-22 07:10:41,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=95, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=591d20eb46a0fc5360008eff87b9f32b, REOPEN/MOVE 2023-07-22 07:10:41,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1841899830, current retry=0 2023-07-22 07:10:41,115 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=591d20eb46a0fc5360008eff87b9f32b, REOPEN/MOVE 2023-07-22 07:10:41,115 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=3fa56e4021041d54f0f41ae9d297fd8d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:41,115 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009841114"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009841114"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009841114"}]},"ts":"1690009841114"} 2023-07-22 07:10:41,116 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=591d20eb46a0fc5360008eff87b9f32b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:41,116 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009841116"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009841116"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009841116"}]},"ts":"1690009841116"} 2023-07-22 07:10:41,120 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=94, state=RUNNABLE; CloseRegionProcedure 3fa56e4021041d54f0f41ae9d297fd8d, server=jenkins-hbase4.apache.org,34133,1690009825283}] 2023-07-22 07:10:41,122 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=97, ppid=95, state=RUNNABLE; CloseRegionProcedure 591d20eb46a0fc5360008eff87b9f32b, server=jenkins-hbase4.apache.org,34133,1690009825283}] 2023-07-22 07:10:41,274 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:41,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3fa56e4021041d54f0f41ae9d297fd8d, disabling compactions & flushes 2023-07-22 07:10:41,277 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 2023-07-22 07:10:41,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 2023-07-22 07:10:41,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. after waiting 0 ms 2023-07-22 07:10:41,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 2023-07-22 07:10:41,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:41,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 2023-07-22 07:10:41,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3fa56e4021041d54f0f41ae9d297fd8d: 2023-07-22 07:10:41,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 3fa56e4021041d54f0f41ae9d297fd8d move to jenkins-hbase4.apache.org,33357,1690009829125 record at close sequenceid=2 2023-07-22 07:10:41,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:41,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:41,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 591d20eb46a0fc5360008eff87b9f32b, disabling compactions & flushes 2023-07-22 07:10:41,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 2023-07-22 07:10:41,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 2023-07-22 07:10:41,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. after waiting 0 ms 2023-07-22 07:10:41,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 
2023-07-22 07:10:41,284 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=3fa56e4021041d54f0f41ae9d297fd8d, regionState=CLOSED 2023-07-22 07:10:41,285 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009841284"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009841284"}]},"ts":"1690009841284"} 2023-07-22 07:10:41,288 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=94 2023-07-22 07:10:41,288 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=94, state=SUCCESS; CloseRegionProcedure 3fa56e4021041d54f0f41ae9d297fd8d, server=jenkins-hbase4.apache.org,34133,1690009825283 in 169 msec 2023-07-22 07:10:41,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:41,289 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3fa56e4021041d54f0f41ae9d297fd8d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33357,1690009829125; forceNewPlan=false, retain=false 2023-07-22 07:10:41,289 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 
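(Both regions are closed on jenkins-hbase4.apache.org,34133 and reopened on 33357 because of the MoveTables request logged a few entries earlier. A minimal sketch of that client call follows, assuming an RSGroupAdminClient built from the test's Connection; the group name is copied from the log, everything else is illustrative.)

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Moves both test tables into the target rsgroup; the master responds with
    // one REOPEN/MOVE TransitRegionStateProcedure per region, as logged above.
    static void moveTablesToGroup(Connection conn) throws IOException {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      Set<TableName> tables = new HashSet<>(Arrays.asList(
          TableName.valueOf("GrouptestMultiTableMoveA"),
          TableName.valueOf("GrouptestMultiTableMoveB")));
      rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_1841899830");
      // After the move, both tables should report the target group.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(
          TableName.valueOf("GrouptestMultiTableMoveA"));
      assert "Group_testMultiTableMove_1841899830".equals(info.getName());
    }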
2023-07-22 07:10:41,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 591d20eb46a0fc5360008eff87b9f32b: 2023-07-22 07:10:41,289 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 591d20eb46a0fc5360008eff87b9f32b move to jenkins-hbase4.apache.org,33357,1690009829125 record at close sequenceid=2 2023-07-22 07:10:41,291 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:41,291 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=591d20eb46a0fc5360008eff87b9f32b, regionState=CLOSED 2023-07-22 07:10:41,291 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009841291"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009841291"}]},"ts":"1690009841291"} 2023-07-22 07:10:41,297 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=97, resume processing ppid=95 2023-07-22 07:10:41,297 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=97, ppid=95, state=SUCCESS; CloseRegionProcedure 591d20eb46a0fc5360008eff87b9f32b, server=jenkins-hbase4.apache.org,34133,1690009825283 in 173 msec 2023-07-22 07:10:41,297 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=95, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=591d20eb46a0fc5360008eff87b9f32b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33357,1690009829125; forceNewPlan=false, retain=false 2023-07-22 07:10:41,440 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=3fa56e4021041d54f0f41ae9d297fd8d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:41,440 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=591d20eb46a0fc5360008eff87b9f32b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:41,440 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009841440"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009841440"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009841440"}]},"ts":"1690009841440"} 2023-07-22 07:10:41,440 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009841440"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009841440"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009841440"}]},"ts":"1690009841440"} 2023-07-22 07:10:41,442 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=94, state=RUNNABLE; OpenRegionProcedure 3fa56e4021041d54f0f41ae9d297fd8d, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:41,443 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=95, state=RUNNABLE; 
OpenRegionProcedure 591d20eb46a0fc5360008eff87b9f32b, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:41,599 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 2023-07-22 07:10:41,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3fa56e4021041d54f0f41ae9d297fd8d, NAME => 'GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:41,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:41,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:41,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:41,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:41,603 INFO [StoreOpener-3fa56e4021041d54f0f41ae9d297fd8d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:41,604 DEBUG [StoreOpener-3fa56e4021041d54f0f41ae9d297fd8d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d/f 2023-07-22 07:10:41,604 DEBUG [StoreOpener-3fa56e4021041d54f0f41ae9d297fd8d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d/f 2023-07-22 07:10:41,605 INFO [StoreOpener-3fa56e4021041d54f0f41ae9d297fd8d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3fa56e4021041d54f0f41ae9d297fd8d columnFamilyName f 2023-07-22 07:10:41,605 INFO [StoreOpener-3fa56e4021041d54f0f41ae9d297fd8d-1] regionserver.HStore(310): Store=3fa56e4021041d54f0f41ae9d297fd8d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:41,606 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:41,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:41,613 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:41,615 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3fa56e4021041d54f0f41ae9d297fd8d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11678468000, jitterRate=0.08764208853244781}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:41,615 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3fa56e4021041d54f0f41ae9d297fd8d: 2023-07-22 07:10:41,616 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d., pid=98, masterSystemTime=1690009841594 2023-07-22 07:10:41,618 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 2023-07-22 07:10:41,618 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 2023-07-22 07:10:41,618 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 
2023-07-22 07:10:41,618 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 591d20eb46a0fc5360008eff87b9f32b, NAME => 'GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:41,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:41,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:41,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:41,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:41,619 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=3fa56e4021041d54f0f41ae9d297fd8d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:41,620 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009841619"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009841619"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009841619"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009841619"}]},"ts":"1690009841619"} 2023-07-22 07:10:41,621 INFO [StoreOpener-591d20eb46a0fc5360008eff87b9f32b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:41,626 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=94 2023-07-22 07:10:41,626 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=94, state=SUCCESS; OpenRegionProcedure 3fa56e4021041d54f0f41ae9d297fd8d, server=jenkins-hbase4.apache.org,33357,1690009829125 in 180 msec 2023-07-22 07:10:41,628 DEBUG [StoreOpener-591d20eb46a0fc5360008eff87b9f32b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b/f 2023-07-22 07:10:41,629 DEBUG [StoreOpener-591d20eb46a0fc5360008eff87b9f32b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b/f 2023-07-22 07:10:41,629 INFO [StoreOpener-591d20eb46a0fc5360008eff87b9f32b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 591d20eb46a0fc5360008eff87b9f32b columnFamilyName f 2023-07-22 07:10:41,630 INFO [StoreOpener-591d20eb46a0fc5360008eff87b9f32b-1] regionserver.HStore(310): Store=591d20eb46a0fc5360008eff87b9f32b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:41,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:41,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:41,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:41,637 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 591d20eb46a0fc5360008eff87b9f32b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10515588800, jitterRate=-0.02065947651863098}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:41,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 591d20eb46a0fc5360008eff87b9f32b: 2023-07-22 07:10:41,638 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b., pid=99, masterSystemTime=1690009841594 2023-07-22 07:10:41,639 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3fa56e4021041d54f0f41ae9d297fd8d, REOPEN/MOVE in 516 msec 2023-07-22 07:10:41,640 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=591d20eb46a0fc5360008eff87b9f32b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:41,640 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 
2023-07-22 07:10:41,640 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009841640"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009841640"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009841640"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009841640"}]},"ts":"1690009841640"} 2023-07-22 07:10:41,640 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 2023-07-22 07:10:41,645 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=95 2023-07-22 07:10:41,645 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=95, state=SUCCESS; OpenRegionProcedure 591d20eb46a0fc5360008eff87b9f32b, server=jenkins-hbase4.apache.org,33357,1690009829125 in 199 msec 2023-07-22 07:10:41,648 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=95, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=591d20eb46a0fc5360008eff87b9f32b, REOPEN/MOVE in 534 msec 2023-07-22 07:10:42,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure.ProcedureSyncWait(216): waitFor pid=94 2023-07-22 07:10:42,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1841899830. 2023-07-22 07:10:42,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:42,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:42,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:42,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-22 07:10:42,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:10:42,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-22 07:10:42,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:10:42,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 
initiates rsgroup info retrieval, group=default 2023-07-22 07:10:42,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:42,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1841899830 2023-07-22 07:10:42,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:42,126 INFO [Listener at localhost/46507] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-22 07:10:42,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-22 07:10:42,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 07:10:42,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-22 07:10:42,130 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009842130"}]},"ts":"1690009842130"} 2023-07-22 07:10:42,132 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-22 07:10:42,133 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-22 07:10:42,134 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=591d20eb46a0fc5360008eff87b9f32b, UNASSIGN}] 2023-07-22 07:10:42,135 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=591d20eb46a0fc5360008eff87b9f32b, UNASSIGN 2023-07-22 07:10:42,136 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=591d20eb46a0fc5360008eff87b9f32b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:42,136 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009842136"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009842136"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009842136"}]},"ts":"1690009842136"} 2023-07-22 07:10:42,137 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; CloseRegionProcedure 591d20eb46a0fc5360008eff87b9f32b, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:42,231 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-22 07:10:42,289 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:42,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 591d20eb46a0fc5360008eff87b9f32b, disabling compactions & flushes 2023-07-22 07:10:42,290 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 2023-07-22 07:10:42,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 2023-07-22 07:10:42,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. after waiting 0 ms 2023-07-22 07:10:42,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 2023-07-22 07:10:42,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 07:10:42,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b. 
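[editor's note] The UNASSIGN/close records above are the server-side half of a client-initiated table disable. A minimal client-side sketch, assuming a standard HBase 2.x connection whose Configuration already points at this cluster (the table name is copied from the log; everything else is illustrative, not the test's exact code):

    // Sketch only: the kind of client call that produces the disable sequence above.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();  // assumption: quorum already configured
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("GrouptestMultiTableMoveA");
          // disableTable blocks until the master's DisableTableProcedure finishes,
          // i.e. until each region has gone through the UNASSIGN/close path logged above.
          admin.disableTable(table);
          System.out.println("disabled: " + admin.isTableDisabled(table));
        }
      }
    }
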
2023-07-22 07:10:42,295 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 591d20eb46a0fc5360008eff87b9f32b: 2023-07-22 07:10:42,297 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:42,297 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=591d20eb46a0fc5360008eff87b9f32b, regionState=CLOSED 2023-07-22 07:10:42,297 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009842297"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009842297"}]},"ts":"1690009842297"} 2023-07-22 07:10:42,300 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-22 07:10:42,300 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; CloseRegionProcedure 591d20eb46a0fc5360008eff87b9f32b, server=jenkins-hbase4.apache.org,33357,1690009829125 in 162 msec 2023-07-22 07:10:42,302 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-22 07:10:42,302 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=591d20eb46a0fc5360008eff87b9f32b, UNASSIGN in 166 msec 2023-07-22 07:10:42,302 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009842302"}]},"ts":"1690009842302"} 2023-07-22 07:10:42,304 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-22 07:10:42,305 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-22 07:10:42,307 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 179 msec 2023-07-22 07:10:42,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-22 07:10:42,432 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 100 completed 2023-07-22 07:10:42,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-22 07:10:42,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 07:10:42,436 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=103, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 07:10:42,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1841899830' 2023-07-22 07:10:42,439 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:42,439 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=103, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 07:10:42,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1841899830 2023-07-22 07:10:42,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:42,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:42,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-22 07:10:42,444 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:42,446 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b/recovered.edits] 2023-07-22 07:10:42,451 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b/recovered.edits/7.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b/recovered.edits/7.seqid 2023-07-22 07:10:42,452 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveA/591d20eb46a0fc5360008eff87b9f32b 2023-07-22 07:10:42,452 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-22 07:10:42,454 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=103, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 07:10:42,458 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-22 07:10:42,459 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-22 07:10:42,460 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=103, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 07:10:42,460 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
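[editor's note] The records around this point show the DeleteTableProcedure archiving the region directory via HFileArchiver and then removing the table's rows from hbase:meta. A hedged sketch of the corresponding client call, under the same connection assumptions as the previous sketch (deleteTable requires the table to be disabled first):

    // Sketch only: client-side delete matching the DeleteTableProcedure records above.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DeleteTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();  // assumption: quorum already configured
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("GrouptestMultiTableMoveA");
          if (!admin.isTableDisabled(table)) {
            admin.disableTable(table);  // deleteTable refuses enabled tables
          }
          admin.deleteTable(table);     // archives region dirs, deletes rows from hbase:meta
          System.out.println("exists: " + admin.tableExists(table));  // expect: false
        }
      }
    }
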
2023-07-22 07:10:42,461 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009842461"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:42,462 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-22 07:10:42,462 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 591d20eb46a0fc5360008eff87b9f32b, NAME => 'GrouptestMultiTableMoveA,,1690009839847.591d20eb46a0fc5360008eff87b9f32b.', STARTKEY => '', ENDKEY => ''}] 2023-07-22 07:10:42,462 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-22 07:10:42,462 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690009842462"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:42,463 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-22 07:10:42,465 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=103, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 07:10:42,466 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 31 msec 2023-07-22 07:10:42,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-22 07:10:42,545 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 103 completed 2023-07-22 07:10:42,546 INFO [Listener at localhost/46507] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-22 07:10:42,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-22 07:10:42,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 07:10:42,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-22 07:10:42,551 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009842551"}]},"ts":"1690009842551"} 2023-07-22 07:10:42,554 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-22 07:10:42,556 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-22 07:10:42,557 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3fa56e4021041d54f0f41ae9d297fd8d, UNASSIGN}] 2023-07-22 07:10:42,559 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3fa56e4021041d54f0f41ae9d297fd8d, UNASSIGN 2023-07-22 07:10:42,560 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=3fa56e4021041d54f0f41ae9d297fd8d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:42,560 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009842560"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009842560"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009842560"}]},"ts":"1690009842560"} 2023-07-22 07:10:42,562 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=105, state=RUNNABLE; CloseRegionProcedure 3fa56e4021041d54f0f41ae9d297fd8d, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:42,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-22 07:10:42,714 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:42,716 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3fa56e4021041d54f0f41ae9d297fd8d, disabling compactions & flushes 2023-07-22 07:10:42,716 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 2023-07-22 07:10:42,716 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 2023-07-22 07:10:42,716 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. after waiting 0 ms 2023-07-22 07:10:42,716 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 2023-07-22 07:10:42,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 07:10:42,720 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d. 
2023-07-22 07:10:42,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3fa56e4021041d54f0f41ae9d297fd8d: 2023-07-22 07:10:42,722 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:42,722 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=3fa56e4021041d54f0f41ae9d297fd8d, regionState=CLOSED 2023-07-22 07:10:42,722 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690009842722"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009842722"}]},"ts":"1690009842722"} 2023-07-22 07:10:42,725 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=105 2023-07-22 07:10:42,725 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=105, state=SUCCESS; CloseRegionProcedure 3fa56e4021041d54f0f41ae9d297fd8d, server=jenkins-hbase4.apache.org,33357,1690009829125 in 162 msec 2023-07-22 07:10:42,727 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-22 07:10:42,727 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3fa56e4021041d54f0f41ae9d297fd8d, UNASSIGN in 168 msec 2023-07-22 07:10:42,727 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009842727"}]},"ts":"1690009842727"} 2023-07-22 07:10:42,729 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-22 07:10:42,731 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-22 07:10:42,732 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 185 msec 2023-07-22 07:10:42,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-22 07:10:42,853 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 104 completed 2023-07-22 07:10:42,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-22 07:10:43,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 07:10:43,057 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 07:10:43,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1841899830' 2023-07-22 07:10:43,058 DEBUG [PEWorker-4] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=107, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 07:10:43,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1841899830 2023-07-22 07:10:43,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:43,063 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:43,065 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d/recovered.edits] 2023-07-22 07:10:43,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-22 07:10:43,074 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d/recovered.edits/7.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d/recovered.edits/7.seqid 2023-07-22 07:10:43,075 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/GrouptestMultiTableMoveB/3fa56e4021041d54f0f41ae9d297fd8d 2023-07-22 07:10:43,075 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-22 07:10:43,078 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=107, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 07:10:43,082 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-22 07:10:43,086 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-22 07:10:43,089 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=107, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 07:10:43,089 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-22 07:10:43,089 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009843089"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:43,091 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-22 07:10:43,091 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 3fa56e4021041d54f0f41ae9d297fd8d, NAME => 'GrouptestMultiTableMoveB,,1690009840467.3fa56e4021041d54f0f41ae9d297fd8d.', STARTKEY => '', ENDKEY => ''}] 2023-07-22 07:10:43,091 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-22 07:10:43,091 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690009843091"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:43,094 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-22 07:10:43,099 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=107, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 07:10:43,102 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 49 msec 2023-07-22 07:10:43,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-22 07:10:43,170 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 107 completed 2023-07-22 07:10:43,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:43,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 07:10:43,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:43,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33357] to rsgroup default 2023-07-22 07:10:43,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1841899830 2023-07-22 07:10:43,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:43,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1841899830, current retry=0 2023-07-22 07:10:43,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33357,1690009829125] are moved back to Group_testMultiTableMove_1841899830 2023-07-22 07:10:43,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1841899830 => default 2023-07-22 07:10:43,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:43,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_1841899830 2023-07-22 07:10:43,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 07:10:43,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:43,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:43,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
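[editor's note] The MoveServers / RemoveRSGroup requests above are the test's rsgroup teardown: the group's server is moved back to "default" and the temporary group is dropped. A minimal sketch, assuming the branch-2.4 hbase-rsgroup client class named in the stack traces below (org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient); the host, port, and group name are copied from the log, and the connection setup is an assumption:

    // Sketch only: rsgroup teardown mirroring the MoveServers / RemoveRSGroup records above.
    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();  // assumption: quorum already configured
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          String group = "Group_testMultiTableMove_1841899830";
          // Move the group's only server back to the default group, then remove the group.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33357)),
              "default");
          rsGroupAdmin.removeRSGroup(group);
        }
      }
    }
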
2023-07-22 07:10:43,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:43,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:43,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:43,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:43,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:43,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:43,203 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:43,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:43,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:43,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:43,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:43,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:43,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 512 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011043215, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:43,216 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:10:43,217 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:43,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,218 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:43,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:43,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:43,238 INFO [Listener at localhost/46507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=509 (was 512), OpenFileDescriptor=789 (was 794), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=404 (was 404), ProcessCount=180 (was 180), AvailableMemoryMB=6781 (was 6825) 2023-07-22 07:10:43,238 WARN [Listener at localhost/46507] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-22 07:10:43,259 INFO [Listener at localhost/46507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=509, OpenFileDescriptor=789, MaxFileDescriptor=60000, SystemLoadAverage=404, ProcessCount=180, AvailableMemoryMB=6781 2023-07-22 07:10:43,260 WARN [Listener at localhost/46507] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-22 07:10:43,260 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-22 07:10:43,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:43,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-22 07:10:43,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:43,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:43,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:43,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:43,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:43,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:43,275 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:43,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:43,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:43,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:43,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:43,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:43,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 540 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011043286, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:43,291 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 07:10:43,293 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:43,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,294 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:43,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:43,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:43,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:43,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:43,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-22 07:10:43,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-22 07:10:43,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:43,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:43,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:33357] to rsgroup oldGroup 2023-07-22 07:10:43,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-22 07:10:43,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:43,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-22 07:10:43,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33357,1690009829125, jenkins-hbase4.apache.org,34133,1690009825283] are moved back to default 2023-07-22 07:10:43,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-22 07:10:43,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:43,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-22 07:10:43,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:43,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-22 07:10:43,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:43,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:43,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:43,339 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-22 07:10:43,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-22 07:10:43,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-22 07:10:43,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 07:10:43,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:43,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39057] to rsgroup anotherRSGroup 2023-07-22 07:10:43,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-22 07:10:43,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-22 07:10:43,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 07:10:43,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-22 07:10:43,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39057,1690009825637] are moved back to default 2023-07-22 07:10:43,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-22 07:10:43,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:43,371 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-22 07:10:43,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:43,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-22 07:10:43,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:43,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-22 07:10:43,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:43,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:38908 deadline: 1690011043381, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-22 07:10:43,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-22 07:10:43,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:43,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:38908 deadline: 1690011043384, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-22 07:10:43,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-22 07:10:43,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:43,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:38908 deadline: 1690011043387, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-22 07:10:43,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-22 07:10:43,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:43,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 580 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:38908 deadline: 1690011043388, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-22 07:10:43,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:43,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
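The three ConstraintExceptions above are the rename checks exercised by testRenameRSGroupConstraints: the source group must exist, the target name must be free, and the built-in default group can never be renamed. The sketch below restates that validation order as implied by the exception messages and the RSGroupInfoManagerImpl line numbers in the traces (403, 407, 410); the class and method names are illustrative assumptions, not the actual branch-2.4 source.

    // Illustrative sketch of the rename validation implied by the ConstraintExceptions above.
    // Check order follows the RSGroupInfoManagerImpl line numbers in the traces (403, 407, 410);
    // names below are assumptions for illustration, not the real HBase implementation.
    import java.util.Map;
    import org.apache.hadoop.hbase.constraint.ConstraintException;

    final class RenameChecksSketch {
      static final String DEFAULT_GROUP = "default";

      static void validateRename(Map<String, ?> groups, String oldName, String newName)
          throws ConstraintException {
        if (DEFAULT_GROUP.equals(oldName)) {
          throw new ConstraintException("Can't rename default rsgroup");           // line 403
        }
        if (!groups.containsKey(oldName)) {
          throw new ConstraintException("RSGroup " + oldName + " does not exist"); // line 407
        }
        if (groups.containsKey(newName)) {
          throw new ConstraintException("Group already exists: " + newName);       // line 410
        }
      }
    }

Each failed rename above maps onto one branch: nonExistingRSGroup to newRSGroup1 trips the existence check, oldGroup to anotherRSGroup and oldGroup to default trip the duplicate-name check, and default to newRSGroup2 trips the reserved-default check.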
2023-07-22 07:10:43,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:43,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39057] to rsgroup default 2023-07-22 07:10:43,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-22 07:10:43,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-22 07:10:43,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 07:10:43,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-22 07:10:43,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39057,1690009825637] are moved back to anotherRSGroup 2023-07-22 07:10:43,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-22 07:10:43,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:43,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-22 07:10:43,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-22 07:10:43,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-22 07:10:43,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:43,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:43,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-22 07:10:43,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:43,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:33357] to rsgroup default 2023-07-22 07:10:43,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-22 07:10:43,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:43,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-22 07:10:43,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33357,1690009829125, jenkins-hbase4.apache.org,34133,1690009825283] are moved back to oldGroup 2023-07-22 07:10:43,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-22 07:10:43,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:43,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-22 07:10:43,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 07:10:43,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:43,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:43,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
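The block above is the per-test cleanup from TestRSGroupsBase.tearDownAfterMethod: tables and servers are moved back to the default group, then the temporary groups (anotherRSGroup, oldGroup, and the group named master) are removed before master is re-added. A hedged sketch of that sequence against the rsgroup admin API follows; of these calls only moveServers is pinned by the traces (RSGroupAdminClient.java:108), the other method names are assumed to mirror the MoveTables, RemoveRSGroup and ListRSGroupInfos requests logged above.

    // Hedged sketch of the cleanup visible in the log: for each non-default group, move its
    // tables and servers back to "default", then drop the group. Apart from moveServers, the
    // method names are assumptions based on the RSGroupAdminService requests logged above.
    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    final class RsGroupCleanupSketch {
      static void restoreDefaultGroup(RSGroupAdmin admin) throws IOException {
        for (RSGroupInfo group : admin.listRSGroups()) {
          if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
            continue; // the default group itself is left in place
          }
          Set<TableName> tables = new HashSet<>(group.getTables());
          Set<Address> servers = new HashSet<>(group.getServers());
          if (!tables.isEmpty()) {
            admin.moveTables(tables, RSGroupInfo.DEFAULT_GROUP);   // MoveTables requests above
          }
          if (!servers.isEmpty()) {
            admin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP); // MoveServers requests above
          }
          admin.removeRSGroup(group.getName());                    // RemoveRSGroup requests above
        }
      }
    }

In the log the empty move requests are simply ignored ("moveTables() passed an empty set. Ignoring."), so the guards above are optional. The test then re-creates the master group and tries to move jenkins-hbase4.apache.org:37061 into it; that port is the one answering these master RPCs and is not among the region servers listed in the default group, which is why the recurring "is either offline or it does not exist" ConstraintException appears.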
2023-07-22 07:10:43,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:43,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:43,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:43,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:43,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:43,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:43,441 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:43,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:43,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:43,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:43,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:43,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:43,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 616 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011043453, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:43,454 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:10:43,456 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:43,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,457 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:43,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:43,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:43,477 INFO [Listener at localhost/46507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=512 (was 509) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=789 (was 789), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=404 (was 404), ProcessCount=180 (was 180), AvailableMemoryMB=6775 (was 6781) 2023-07-22 07:10:43,478 WARN [Listener at localhost/46507] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-22 07:10:43,498 INFO [Listener at localhost/46507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=511, OpenFileDescriptor=789, MaxFileDescriptor=60000, SystemLoadAverage=404, ProcessCount=180, AvailableMemoryMB=6772 2023-07-22 07:10:43,499 WARN [Listener at localhost/46507] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-22 07:10:43,499 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-22 07:10:43,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:43,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
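Between tests the listener thread polls until the cleanup is reflected in ListRSGroupInfos: Waiter(180) announces a 60,000 ms budget and TestRSGroupsBase$2(169) reprints the current groups until only default and master remain. A minimal sketch of that polling step is below; the waitFor/Waiter.Predicate pattern is the stock HBase test utility, but the predicate body is an assumption about what TestRSGroupsBase actually checks.

    // Sketch of the "Waiting for cleanup to finish" poll seen above: wait up to 60 s until
    // listRSGroups() reports only the "default" and "master" groups. The predicate body is an
    // assumption; the waitFor/Waiter.Predicate usage is the standard HBase test utility.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.Waiter;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    final class CleanupWaitSketch {
      static void waitForCleanup(HBaseTestingUtility util, RSGroupAdmin admin) throws Exception {
        util.waitFor(60_000, (Waiter.Predicate<Exception>) () -> {
          for (RSGroupInfo group : admin.listRSGroups()) {
            String name = group.getName();
            if (!RSGroupInfo.DEFAULT_GROUP.equals(name) && !"master".equals(name)) {
              return false; // a temporary test group is still present
            }
          }
          return true; // matches the "[Name:default, ..., Name:master, ...]" state logged above
        });
      }
    }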
2023-07-22 07:10:43,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:43,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:43,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:43,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:43,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:43,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:43,516 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:43,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:43,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:43,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:43,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:43,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:43,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 644 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011043528, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:43,529 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:10:43,530 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:43,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,532 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:43,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:43,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:43,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:43,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:43,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-22 07:10:43,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 07:10:43,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:43,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:43,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:33357] to rsgroup oldgroup 2023-07-22 07:10:43,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 07:10:43,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:43,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-22 07:10:43,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33357,1690009829125, jenkins-hbase4.apache.org,34133,1690009825283] are moved back to default 2023-07-22 07:10:43,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-22 07:10:43,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:43,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:43,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:43,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-22 07:10:43,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:43,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:43,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=108, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-22 07:10:43,580 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:10:43,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 108 2023-07-22 07:10:43,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-22 07:10:43,582 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 07:10:43,583 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:43,583 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:43,583 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:43,585 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:10:43,587 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:43,588 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8 empty. 
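
The entries at 07:10:43,534–43,570 above show the test adding the group 'oldgroup' and moving two region servers into it before it creates testRename. A minimal client-side sketch of that sequence, assuming the coprocessor-based RSGroupAdminClient from the hbase-rsgroup module (the same class that appears in the stack traces above); the host/port pairs are taken from the log purely as placeholders:

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupSetupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Create the target group, as in "add rsgroup oldgroup" above.
          rsGroupAdmin.addRSGroup("oldgroup");
          // Move two servers out of the default group; host/port are placeholders.
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34133));
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33357));
          rsGroupAdmin.moveServers(servers, "oldgroup");
        }
      }
    }
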
2023-07-22 07:10:43,588 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:43,588 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-22 07:10:43,625 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:43,626 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => d1da522f8583716887d9f5a3f6b36be8, NAME => 'testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:43,664 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:43,664 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing d1da522f8583716887d9f5a3f6b36be8, disabling compactions & flushes 2023-07-22 07:10:43,664 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:43,664 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:43,664 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. after waiting 0 ms 2023-07-22 07:10:43,664 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:43,664 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:43,664 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for d1da522f8583716887d9f5a3f6b36be8: 2023-07-22 07:10:43,668 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:10:43,669 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690009843669"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009843669"}]},"ts":"1690009843669"} 2023-07-22 07:10:43,670 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
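
The descriptor logged at 07:10:43,576 ('tr' family, one version, no compression) is what the CreateTableProcedure and the RegionOpenAndInit work just above act on. A sketch of issuing an equivalent create through the public Admin API, assuming an open Connection; only the family name and VERSIONS setting from the log are carried over, everything else stays at defaults:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTestRenameSketch {
      static void createTable(Connection conn) throws Exception {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("testRename"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("tr"))
                .setMaxVersions(1)   // VERSIONS => '1' in the logged descriptor
                .build())
            .build();
        try (Admin admin = conn.getAdmin()) {
          admin.createTable(desc);   // drives a CreateTableProcedure like pid=108 above
        }
      }
    }
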
2023-07-22 07:10:43,671 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:10:43,671 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009843671"}]},"ts":"1690009843671"} 2023-07-22 07:10:43,673 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-22 07:10:43,677 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:43,677 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:43,677 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:43,677 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:43,680 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=d1da522f8583716887d9f5a3f6b36be8, ASSIGN}] 2023-07-22 07:10:43,683 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=109, ppid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=d1da522f8583716887d9f5a3f6b36be8, ASSIGN 2023-07-22 07:10:43,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-22 07:10:43,685 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=109, ppid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=d1da522f8583716887d9f5a3f6b36be8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39057,1690009825637; forceNewPlan=false, retain=false 2023-07-22 07:10:43,818 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-22 07:10:43,835 INFO [jenkins-hbase4:37061] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
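
The balancer has just produced an assignment plan for the new region, and the ASSIGN procedure records the chosen server in hbase:meta (see the OPENING/OPEN updates below). A sketch, assuming an open Connection, of how a client can read that location back once the region is open:

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RegionLocationSketch {
      static void printLocation(Connection conn) throws Exception {
        try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("testRename"))) {
          // reload=true forces a fresh meta lookup so the location reflects the ASSIGN result.
          HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
          System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
        }
      }
    }
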
2023-07-22 07:10:43,837 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=109 updating hbase:meta row=d1da522f8583716887d9f5a3f6b36be8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:43,837 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690009843837"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009843837"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009843837"}]},"ts":"1690009843837"} 2023-07-22 07:10:43,843 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE; OpenRegionProcedure d1da522f8583716887d9f5a3f6b36be8, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:43,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-22 07:10:44,003 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:44,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d1da522f8583716887d9f5a3f6b36be8, NAME => 'testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:44,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:44,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,005 INFO [StoreOpener-d1da522f8583716887d9f5a3f6b36be8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,007 DEBUG [StoreOpener-d1da522f8583716887d9f5a3f6b36be8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8/tr 2023-07-22 07:10:44,007 DEBUG [StoreOpener-d1da522f8583716887d9f5a3f6b36be8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8/tr 2023-07-22 07:10:44,008 INFO [StoreOpener-d1da522f8583716887d9f5a3f6b36be8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d1da522f8583716887d9f5a3f6b36be8 columnFamilyName tr 2023-07-22 07:10:44,008 INFO [StoreOpener-d1da522f8583716887d9f5a3f6b36be8-1] regionserver.HStore(310): Store=d1da522f8583716887d9f5a3f6b36be8/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:44,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:44,015 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d1da522f8583716887d9f5a3f6b36be8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11815643840, jitterRate=0.10041758418083191}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:44,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d1da522f8583716887d9f5a3f6b36be8: 2023-07-22 07:10:44,015 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8., pid=110, masterSystemTime=1690009843998 2023-07-22 07:10:44,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:44,017 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 
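
The CompactionConfiguration(173) entry above is printed from the store's standard compaction settings. A hedged sketch that reads the corresponding configuration keys; the key names are the stock HBase ones and the fallback values simply mirror the figures in the log, they are not specific to this test:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // minCompactSize:128 MB, files [3,10), ratio 1.2 / 5.0, major period 604800000 ms above:
        long minCompactSize = conf.getLong("hbase.hstore.compaction.min.size", 128 * 1024 * 1024L);
        int minFilesToCompact = conf.getInt("hbase.hstore.compaction.min", 3);
        int maxFilesToCompact = conf.getInt("hbase.hstore.compaction.max", 10);
        float ratio = conf.getFloat("hbase.hstore.compaction.ratio", 1.2f);
        float offPeakRatio = conf.getFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
        long majorPeriodMs = conf.getLong("hbase.hregion.majorcompaction", 604800000L);
        System.out.printf("min=%d files=[%d,%d) ratio=%.1f offPeak=%.1f major=%dms%n",
            minCompactSize, minFilesToCompact, maxFilesToCompact, ratio, offPeakRatio, majorPeriodMs);
      }
    }
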
2023-07-22 07:10:44,018 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=109 updating hbase:meta row=d1da522f8583716887d9f5a3f6b36be8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:44,018 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690009844017"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009844017"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009844017"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009844017"}]},"ts":"1690009844017"} 2023-07-22 07:10:44,031 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-22 07:10:44,031 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; OpenRegionProcedure d1da522f8583716887d9f5a3f6b36be8, server=jenkins-hbase4.apache.org,39057,1690009825637 in 176 msec 2023-07-22 07:10:44,033 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-22 07:10:44,033 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=d1da522f8583716887d9f5a3f6b36be8, ASSIGN in 354 msec 2023-07-22 07:10:44,034 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:10:44,034 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009844034"}]},"ts":"1690009844034"} 2023-07-22 07:10:44,036 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-22 07:10:44,039 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:10:44,041 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=108, state=SUCCESS; CreateTableProcedure table=testRename in 464 msec 2023-07-22 07:10:44,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-22 07:10:44,186 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 108 completed 2023-07-22 07:10:44,186 DEBUG [Listener at localhost/46507] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-22 07:10:44,187 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:44,190 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-22 07:10:44,190 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:44,190 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
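
The "Waiting until all regions of table testRename get assigned" lines come from the test utility's post-create check. A minimal sketch of the same wait, assuming a running mini cluster held in an HBaseTestingUtility instance:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForAssignmentSketch {
      static void waitForTable(HBaseTestingUtility testUtil) throws Exception {
        // Blocks until every region of testRename is assigned (default 60s timeout),
        // which is what produces the "All regions for table testRename assigned" lines above.
        testUtil.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"));
      }
    }
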
2023-07-22 07:10:44,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-22 07:10:44,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 07:10:44,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:44,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:44,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:44,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-22 07:10:44,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(345): Moving region d1da522f8583716887d9f5a3f6b36be8 to RSGroup oldgroup 2023-07-22 07:10:44,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:44,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:44,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:44,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:44,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:44,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=d1da522f8583716887d9f5a3f6b36be8, REOPEN/MOVE 2023-07-22 07:10:44,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-22 07:10:44,198 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=d1da522f8583716887d9f5a3f6b36be8, REOPEN/MOVE 2023-07-22 07:10:44,198 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=d1da522f8583716887d9f5a3f6b36be8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:44,198 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690009844198"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009844198"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009844198"}]},"ts":"1690009844198"} 2023-07-22 07:10:44,200 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, 
ppid=111, state=RUNNABLE; CloseRegionProcedure d1da522f8583716887d9f5a3f6b36be8, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:44,353 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,355 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d1da522f8583716887d9f5a3f6b36be8, disabling compactions & flushes 2023-07-22 07:10:44,355 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:44,355 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:44,355 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. after waiting 0 ms 2023-07-22 07:10:44,355 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:44,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:44,369 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:44,369 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d1da522f8583716887d9f5a3f6b36be8: 2023-07-22 07:10:44,369 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d1da522f8583716887d9f5a3f6b36be8 move to jenkins-hbase4.apache.org,33357,1690009829125 record at close sequenceid=2 2023-07-22 07:10:44,371 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,372 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=d1da522f8583716887d9f5a3f6b36be8, regionState=CLOSED 2023-07-22 07:10:44,372 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690009844371"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009844371"}]},"ts":"1690009844371"} 2023-07-22 07:10:44,375 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-22 07:10:44,375 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure d1da522f8583716887d9f5a3f6b36be8, server=jenkins-hbase4.apache.org,39057,1690009825637 in 173 msec 2023-07-22 07:10:44,375 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=d1da522f8583716887d9f5a3f6b36be8, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33357,1690009829125; 
forceNewPlan=false, retain=false 2023-07-22 07:10:44,526 INFO [jenkins-hbase4:37061] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-22 07:10:44,526 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=d1da522f8583716887d9f5a3f6b36be8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:44,526 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690009844526"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009844526"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009844526"}]},"ts":"1690009844526"} 2023-07-22 07:10:44,528 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=111, state=RUNNABLE; OpenRegionProcedure d1da522f8583716887d9f5a3f6b36be8, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:44,685 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:44,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d1da522f8583716887d9f5a3f6b36be8, NAME => 'testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:44,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:44,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,687 INFO [StoreOpener-d1da522f8583716887d9f5a3f6b36be8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,688 DEBUG [StoreOpener-d1da522f8583716887d9f5a3f6b36be8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8/tr 2023-07-22 07:10:44,688 DEBUG [StoreOpener-d1da522f8583716887d9f5a3f6b36be8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8/tr 2023-07-22 07:10:44,688 INFO [StoreOpener-d1da522f8583716887d9f5a3f6b36be8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d1da522f8583716887d9f5a3f6b36be8 columnFamilyName tr 2023-07-22 07:10:44,689 INFO [StoreOpener-d1da522f8583716887d9f5a3f6b36be8-1] regionserver.HStore(310): Store=d1da522f8583716887d9f5a3f6b36be8/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:44,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,694 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:44,695 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d1da522f8583716887d9f5a3f6b36be8; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9974542240, jitterRate=-0.07104836404323578}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:44,695 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d1da522f8583716887d9f5a3f6b36be8: 2023-07-22 07:10:44,698 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8., pid=113, masterSystemTime=1690009844681 2023-07-22 07:10:44,700 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:44,700 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 
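
The entries from 07:10:44,192 onward show "move tables [testRename] to rsgroup oldgroup", which produces the REOPEN/MOVE above and reopens the region on an oldgroup server (33357). A sketch of that call and a follow-up ownership check with the same RSGroupAdminClient; the method names mirror the RPCs visible in the log (MoveTables, GetRSGroupInfoOfTable) but should otherwise be treated as assumptions:

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTableSketch {
      static void moveTestRename(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // Move the table; the master reopens its regions on servers of the target group.
        rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
        // Confirm the table is now owned by oldgroup.
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
        System.out.println("group of testRename: " + info.getName());
      }
    }
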
2023-07-22 07:10:44,700 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=d1da522f8583716887d9f5a3f6b36be8, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:44,701 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690009844700"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009844700"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009844700"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009844700"}]},"ts":"1690009844700"} 2023-07-22 07:10:44,704 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=111 2023-07-22 07:10:44,704 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=111, state=SUCCESS; OpenRegionProcedure d1da522f8583716887d9f5a3f6b36be8, server=jenkins-hbase4.apache.org,33357,1690009829125 in 174 msec 2023-07-22 07:10:44,705 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=d1da522f8583716887d9f5a3f6b36be8, REOPEN/MOVE in 507 msec 2023-07-22 07:10:45,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure.ProcedureSyncWait(216): waitFor pid=111 2023-07-22 07:10:45,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-22 07:10:45,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:45,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:45,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:45,203 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:45,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-22 07:10:45,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:10:45,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-22 07:10:45,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:45,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-22 07:10:45,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:10:45,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:45,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:45,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-22 07:10:45,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 07:10:45,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-22 07:10:45,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:45,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:45,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 07:10:45,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:45,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:45,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:45,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39057] to rsgroup normal 2023-07-22 07:10:45,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 07:10:45,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-22 07:10:45,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:45,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:45,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 07:10:45,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-22 07:10:45,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39057,1690009825637] are moved back to default 2023-07-22 07:10:45,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-22 07:10:45,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:45,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:45,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:45,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-22 07:10:45,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:45,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:45,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-22 07:10:45,239 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:10:45,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 114 2023-07-22 07:10:45,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-22 07:10:45,240 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 07:10:45,241 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-22 07:10:45,241 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:45,242 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-22 07:10:45,242 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 07:10:45,244 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:10:45,246 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:45,246 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50 empty. 2023-07-22 07:10:45,247 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:45,247 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-22 07:10:45,280 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:45,281 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => b6b55dee15272a98eb856a00e0a41f50, NAME => 'unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:45,299 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:45,300 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing b6b55dee15272a98eb856a00e0a41f50, disabling compactions & flushes 2023-07-22 07:10:45,300 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:45,300 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:45,300 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. after waiting 0 ms 2023-07-22 07:10:45,300 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:45,300 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 
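
Before creating unmovedTable, the test adds a second group, 'normal', and moves jenkins-hbase4.apache.org:39057 into it (entries at 07:10:45,208–45,229). A short sketch of verifying that layout afterwards, again assuming RSGroupAdminClient; the port is taken from the log purely as an illustration:

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class GroupMembershipSketch {
      static void checkNormalGroup(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        RSGroupInfo normal = rsGroupAdmin.getRSGroupInfo("normal");
        // The log shows jenkins-hbase4.apache.org:39057 being moved into 'normal'.
        boolean moved = normal.getServers().contains(
            Address.fromParts("jenkins-hbase4.apache.org", 39057));
        System.out.println("39057 in group 'normal': " + moved);
        for (RSGroupInfo g : rsGroupAdmin.listRSGroups()) {
          System.out.println(g.getName() + " servers=" + g.getServers() + " tables=" + g.getTables());
        }
      }
    }
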
2023-07-22 07:10:45,300 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for b6b55dee15272a98eb856a00e0a41f50: 2023-07-22 07:10:45,303 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:10:45,304 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690009845304"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009845304"}]},"ts":"1690009845304"} 2023-07-22 07:10:45,305 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 07:10:45,306 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:10:45,306 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009845306"}]},"ts":"1690009845306"} 2023-07-22 07:10:45,307 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-22 07:10:45,312 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=b6b55dee15272a98eb856a00e0a41f50, ASSIGN}] 2023-07-22 07:10:45,314 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=b6b55dee15272a98eb856a00e0a41f50, ASSIGN 2023-07-22 07:10:45,314 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=b6b55dee15272a98eb856a00e0a41f50, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41787,1690009825478; forceNewPlan=false, retain=false 2023-07-22 07:10:45,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-22 07:10:45,466 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=b6b55dee15272a98eb856a00e0a41f50, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:45,466 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690009845466"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009845466"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009845466"}]},"ts":"1690009845466"} 2023-07-22 07:10:45,469 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure b6b55dee15272a98eb856a00e0a41f50, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:45,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=114 2023-07-22 07:10:45,630 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:45,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b6b55dee15272a98eb856a00e0a41f50, NAME => 'unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:45,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:45,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:45,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:45,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:45,633 INFO [StoreOpener-b6b55dee15272a98eb856a00e0a41f50-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:45,634 DEBUG [StoreOpener-b6b55dee15272a98eb856a00e0a41f50-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50/ut 2023-07-22 07:10:45,635 DEBUG [StoreOpener-b6b55dee15272a98eb856a00e0a41f50-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50/ut 2023-07-22 07:10:45,635 INFO [StoreOpener-b6b55dee15272a98eb856a00e0a41f50-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b6b55dee15272a98eb856a00e0a41f50 columnFamilyName ut 2023-07-22 07:10:45,636 INFO [StoreOpener-b6b55dee15272a98eb856a00e0a41f50-1] regionserver.HStore(310): Store=b6b55dee15272a98eb856a00e0a41f50/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:45,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:45,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:45,640 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:45,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:45,648 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b6b55dee15272a98eb856a00e0a41f50; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11709984480, jitterRate=0.09057728946208954}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:45,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b6b55dee15272a98eb856a00e0a41f50: 2023-07-22 07:10:45,649 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50., pid=116, masterSystemTime=1690009845626 2023-07-22 07:10:45,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:45,652 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 
2023-07-22 07:10:45,652 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=b6b55dee15272a98eb856a00e0a41f50, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:45,652 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690009845652"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009845652"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009845652"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009845652"}]},"ts":"1690009845652"} 2023-07-22 07:10:45,656 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-22 07:10:45,656 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure b6b55dee15272a98eb856a00e0a41f50, server=jenkins-hbase4.apache.org,41787,1690009825478 in 186 msec 2023-07-22 07:10:45,658 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-22 07:10:45,658 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=b6b55dee15272a98eb856a00e0a41f50, ASSIGN in 344 msec 2023-07-22 07:10:45,659 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:10:45,659 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009845659"}]},"ts":"1690009845659"} 2023-07-22 07:10:45,660 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-22 07:10:45,663 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:10:45,664 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=unmovedTable in 427 msec 2023-07-22 07:10:45,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-22 07:10:45,844 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 114 completed 2023-07-22 07:10:45,844 DEBUG [Listener at localhost/46507] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-22 07:10:45,844 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:45,847 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-22 07:10:45,847 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:45,847 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
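
[Illustrative sketch, not part of the log.] The entries above (pid=114, CreateTableProcedure for unmovedTable with a single column family 'ut', REGION_REPLICATION => '1', VERSIONS => '1', BLOOMFILTER => 'NONE', BLOCKSIZE => '65536') are what the master emits for an ordinary client-side createTable request. A minimal sketch of an equivalent request against the HBase 2.x Admin API follows; only the table name and the family attributes are taken from the log, the configuration source, class name and the final region-location check are assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateUnmovedTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();              // reads hbase-site.xml from the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("unmovedTable");            // table name from the log
      TableDescriptor desc = TableDescriptorBuilder.newBuilder(tn)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("ut"))
              .setMaxVersions(1)                                   // VERSIONS => '1'
              .setBloomFilterType(BloomType.NONE)                  // BLOOMFILTER => 'NONE'
              .setBlocksize(65536)                                 // BLOCKSIZE => '65536'
              .build())
          .build();
      admin.createTable(desc);                                     // returns once the CreateTableProcedure completes
      // Rough client-side analogue of the "Waiting until all regions ... get assigned" check above.
      try (RegionLocator locator = conn.getRegionLocator(tn)) {
        locator.getAllRegionLocations().forEach(l -> System.out.println(l.getServerName()));
      }
    }
  }
}
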
2023-07-22 07:10:45,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-22 07:10:45,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 07:10:45,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-22 07:10:45,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:45,852 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-22 07:10:45,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:45,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 07:10:45,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-22 07:10:45,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(345): Moving region b6b55dee15272a98eb856a00e0a41f50 to RSGroup normal 2023-07-22 07:10:45,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=b6b55dee15272a98eb856a00e0a41f50, REOPEN/MOVE 2023-07-22 07:10:45,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-22 07:10:45,855 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=b6b55dee15272a98eb856a00e0a41f50, REOPEN/MOVE 2023-07-22 07:10:45,855 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=b6b55dee15272a98eb856a00e0a41f50, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:45,856 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690009845855"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009845855"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009845855"}]},"ts":"1690009845855"} 2023-07-22 07:10:45,857 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE; CloseRegionProcedure b6b55dee15272a98eb856a00e0a41f50, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:46,010 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:46,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b6b55dee15272a98eb856a00e0a41f50, disabling compactions & flushes 2023-07-22 07:10:46,011 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:46,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:46,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. after waiting 0 ms 2023-07-22 07:10:46,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:46,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:46,016 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:46,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b6b55dee15272a98eb856a00e0a41f50: 2023-07-22 07:10:46,016 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b6b55dee15272a98eb856a00e0a41f50 move to jenkins-hbase4.apache.org,39057,1690009825637 record at close sequenceid=2 2023-07-22 07:10:46,018 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:46,019 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=b6b55dee15272a98eb856a00e0a41f50, regionState=CLOSED 2023-07-22 07:10:46,019 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690009846019"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009846019"}]},"ts":"1690009846019"} 2023-07-22 07:10:46,022 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-22 07:10:46,022 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure b6b55dee15272a98eb856a00e0a41f50, server=jenkins-hbase4.apache.org,41787,1690009825478 in 163 msec 2023-07-22 07:10:46,022 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=b6b55dee15272a98eb856a00e0a41f50, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39057,1690009825637; forceNewPlan=false, retain=false 2023-07-22 07:10:46,173 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=b6b55dee15272a98eb856a00e0a41f50, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:46,173 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690009846173"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009846173"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009846173"}]},"ts":"1690009846173"} 2023-07-22 07:10:46,176 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure b6b55dee15272a98eb856a00e0a41f50, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:46,331 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:46,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b6b55dee15272a98eb856a00e0a41f50, NAME => 'unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:46,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:46,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:46,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:46,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:46,333 INFO [StoreOpener-b6b55dee15272a98eb856a00e0a41f50-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:46,334 DEBUG [StoreOpener-b6b55dee15272a98eb856a00e0a41f50-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50/ut 2023-07-22 07:10:46,334 DEBUG [StoreOpener-b6b55dee15272a98eb856a00e0a41f50-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50/ut 2023-07-22 07:10:46,335 INFO [StoreOpener-b6b55dee15272a98eb856a00e0a41f50-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
b6b55dee15272a98eb856a00e0a41f50 columnFamilyName ut 2023-07-22 07:10:46,335 INFO [StoreOpener-b6b55dee15272a98eb856a00e0a41f50-1] regionserver.HStore(310): Store=b6b55dee15272a98eb856a00e0a41f50/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:46,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:46,337 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:46,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:46,340 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b6b55dee15272a98eb856a00e0a41f50; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11475503040, jitterRate=0.06873950362205505}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:46,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b6b55dee15272a98eb856a00e0a41f50: 2023-07-22 07:10:46,341 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50., pid=119, masterSystemTime=1690009846327 2023-07-22 07:10:46,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:46,342 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 
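
[Illustrative sketch, not part of the log.] The request logged at 07:10:45,849 ("move tables [unmovedTable] to rsgroup normal") triggers the REOPEN/MOVE recorded above: the table's single region is closed on the server at port 41787 and reopened on a server in the target group at port 39057. A hedged sketch of issuing the same request through the rsgroup admin client shipped with this module (RSGroupAdminClient is an internal client; the surrounding setup and the verification call are assumptions):

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableToNormalSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      TableName tn = TableName.valueOf("unmovedTable");
      // MoveTables RPC; the master waits on the region-move procedures before replying,
      // which is what the ProcedureSyncWait entry further down corresponds to.
      rsGroupAdmin.moveTables(Collections.singleton(tn), "normal");
      RSGroupInfo group = rsGroupAdmin.getRSGroupInfoOfTable(tn);   // GetRSGroupInfoOfTable
      System.out.println("unmovedTable now in group: " + group.getName());
    }
  }
}
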
2023-07-22 07:10:46,343 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=b6b55dee15272a98eb856a00e0a41f50, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:46,343 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690009846343"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009846343"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009846343"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009846343"}]},"ts":"1690009846343"} 2023-07-22 07:10:46,346 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-22 07:10:46,346 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure b6b55dee15272a98eb856a00e0a41f50, server=jenkins-hbase4.apache.org,39057,1690009825637 in 168 msec 2023-07-22 07:10:46,348 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=b6b55dee15272a98eb856a00e0a41f50, REOPEN/MOVE in 492 msec 2023-07-22 07:10:46,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-22 07:10:46,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-22 07:10:46,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:46,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:46,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:46,861 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:46,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-22 07:10:46,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:10:46,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-22 07:10:46,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:46,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-22 07:10:46,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:10:46,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-22 07:10:46,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-22 07:10:46,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:46,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:46,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-22 07:10:46,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-22 07:10:46,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-22 07:10:46,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:46,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:46,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-22 07:10:46,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:46,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-22 07:10:46,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:10:46,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-22 07:10:46,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:10:46,879 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:46,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:46,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-22 07:10:46,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-22 07:10:46,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:46,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:46,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-22 07:10:46,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 07:10:46,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-22 07:10:46,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(345): Moving region b6b55dee15272a98eb856a00e0a41f50 to RSGroup default 2023-07-22 07:10:46,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=b6b55dee15272a98eb856a00e0a41f50, REOPEN/MOVE 2023-07-22 07:10:46,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-22 07:10:46,890 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=b6b55dee15272a98eb856a00e0a41f50, REOPEN/MOVE 2023-07-22 07:10:46,890 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=b6b55dee15272a98eb856a00e0a41f50, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:46,890 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690009846890"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009846890"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009846890"}]},"ts":"1690009846890"} 2023-07-22 07:10:46,892 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; CloseRegionProcedure b6b55dee15272a98eb856a00e0a41f50, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:47,044 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:47,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b6b55dee15272a98eb856a00e0a41f50, disabling compactions & flushes 2023-07-22 07:10:47,046 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:47,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:47,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. after waiting 0 ms 2023-07-22 07:10:47,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:47,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 07:10:47,050 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:47,050 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b6b55dee15272a98eb856a00e0a41f50: 2023-07-22 07:10:47,050 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b6b55dee15272a98eb856a00e0a41f50 move to jenkins-hbase4.apache.org,41787,1690009825478 record at close sequenceid=5 2023-07-22 07:10:47,052 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:47,052 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=b6b55dee15272a98eb856a00e0a41f50, regionState=CLOSED 2023-07-22 07:10:47,052 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690009847052"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009847052"}]},"ts":"1690009847052"} 2023-07-22 07:10:47,054 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-22 07:10:47,055 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure b6b55dee15272a98eb856a00e0a41f50, server=jenkins-hbase4.apache.org,39057,1690009825637 in 162 msec 2023-07-22 07:10:47,055 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=b6b55dee15272a98eb856a00e0a41f50, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41787,1690009825478; forceNewPlan=false, retain=false 2023-07-22 07:10:47,206 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=b6b55dee15272a98eb856a00e0a41f50, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:47,206 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690009847205"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009847205"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009847205"}]},"ts":"1690009847205"} 2023-07-22 07:10:47,208 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure b6b55dee15272a98eb856a00e0a41f50, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:47,363 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:47,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b6b55dee15272a98eb856a00e0a41f50, NAME => 'unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:47,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:47,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:47,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:47,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:47,366 INFO [StoreOpener-b6b55dee15272a98eb856a00e0a41f50-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:47,367 DEBUG [StoreOpener-b6b55dee15272a98eb856a00e0a41f50-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50/ut 2023-07-22 07:10:47,367 DEBUG [StoreOpener-b6b55dee15272a98eb856a00e0a41f50-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50/ut 2023-07-22 07:10:47,367 INFO [StoreOpener-b6b55dee15272a98eb856a00e0a41f50-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b6b55dee15272a98eb856a00e0a41f50 columnFamilyName ut 2023-07-22 07:10:47,368 INFO [StoreOpener-b6b55dee15272a98eb856a00e0a41f50-1] regionserver.HStore(310): Store=b6b55dee15272a98eb856a00e0a41f50/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:47,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:47,370 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:47,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:47,373 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b6b55dee15272a98eb856a00e0a41f50; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10020972800, jitterRate=-0.06672418117523193}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:47,373 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b6b55dee15272a98eb856a00e0a41f50: 2023-07-22 07:10:47,374 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50., pid=122, masterSystemTime=1690009847359 2023-07-22 07:10:47,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:47,376 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 
2023-07-22 07:10:47,376 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=b6b55dee15272a98eb856a00e0a41f50, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:47,376 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690009847376"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009847376"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009847376"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009847376"}]},"ts":"1690009847376"} 2023-07-22 07:10:47,379 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-22 07:10:47,379 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure b6b55dee15272a98eb856a00e0a41f50, server=jenkins-hbase4.apache.org,41787,1690009825478 in 170 msec 2023-07-22 07:10:47,379 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-22 07:10:47,381 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=b6b55dee15272a98eb856a00e0a41f50, REOPEN/MOVE in 490 msec 2023-07-22 07:10:47,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-22 07:10:47,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
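
[Illustrative sketch, not part of the log.] The rename requested at 07:10:46,864 and the MoveTables request at 07:10:46,881 together rename oldgroup to newgroup and return unmovedTable to the default group; the close on port 39057 and reopen on port 41787 above are the region following that second move. A hedged sketch of the two client calls plus the read-only verification RPCs logged in between (renameRSGroup is assumed to be exposed by RSGroupAdminClient on this branch with this shape, matching the RenameRSGroup service call in the log):

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RenameGroupAndMoveBackSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");                       // RenameRSGroup
      System.out.println(rsGroupAdmin.getRSGroupInfo("newgroup").getTables());  // GetRSGroupInfo
      // testRename should resolve to the renamed group; unmovedTable was parked in "normal"
      // beforehand, so the rename must not have touched it.
      System.out.println(rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename")).getName());
      System.out.println(rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("unmovedTable")).getName());
      // Finally move unmovedTable back; blocks until its region is reopened in the default group.
      rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("unmovedTable")),
          RSGroupInfo.DEFAULT_GROUP);
    }
  }
}
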
2023-07-22 07:10:47,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:47,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39057] to rsgroup default 2023-07-22 07:10:47,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-22 07:10:47,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:47,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:47,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-22 07:10:47,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 07:10:47,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-22 07:10:47,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39057,1690009825637] are moved back to normal 2023-07-22 07:10:47,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-22 07:10:47,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:47,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-22 07:10:47,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:47,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:47,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-22 07:10:47,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-22 07:10:47,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:47,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:47,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
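
[Illustrative sketch, not part of the log.] Emptying and deleting the temporary group happens in two steps above: MoveServers returns jenkins-hbase4.apache.org:39057 to the default group (with no regions left to shuffle, hence "Moving 0 region(s)"), then RemoveRSGroup drops the now-empty normal group. A hedged sketch of the same pair of calls; the host and port are taken from the log, everything else is illustrative:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class DropNormalGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      Address rs = Address.fromParts("jenkins-hbase4.apache.org", 39057);             // server from the log
      rsGroupAdmin.moveServers(Collections.singleton(rs), RSGroupInfo.DEFAULT_GROUP);  // MoveServers
      rsGroupAdmin.removeRSGroup("normal");                                            // RemoveRSGroup (group must be empty)
    }
  }
}
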
2023-07-22 07:10:47,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:47,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:47,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:47,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:47,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:47,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-22 07:10:47,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 07:10:47,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:47,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-22 07:10:47,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:47,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-22 07:10:47,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:47,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-22 07:10:47,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(345): Moving region d1da522f8583716887d9f5a3f6b36be8 to RSGroup default 2023-07-22 07:10:47,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=d1da522f8583716887d9f5a3f6b36be8, REOPEN/MOVE 2023-07-22 07:10:47,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-22 07:10:47,916 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=d1da522f8583716887d9f5a3f6b36be8, REOPEN/MOVE 2023-07-22 07:10:47,917 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=d1da522f8583716887d9f5a3f6b36be8, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:47,917 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690009847917"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009847917"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009847917"}]},"ts":"1690009847917"} 2023-07-22 07:10:47,918 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure d1da522f8583716887d9f5a3f6b36be8, server=jenkins-hbase4.apache.org,33357,1690009829125}] 2023-07-22 07:10:48,071 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:48,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d1da522f8583716887d9f5a3f6b36be8, disabling compactions & flushes 2023-07-22 07:10:48,072 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:48,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:48,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. after waiting 0 ms 2023-07-22 07:10:48,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:48,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 07:10:48,076 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 
2023-07-22 07:10:48,077 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d1da522f8583716887d9f5a3f6b36be8: 2023-07-22 07:10:48,077 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d1da522f8583716887d9f5a3f6b36be8 move to jenkins-hbase4.apache.org,39057,1690009825637 record at close sequenceid=5 2023-07-22 07:10:48,078 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:48,078 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=d1da522f8583716887d9f5a3f6b36be8, regionState=CLOSED 2023-07-22 07:10:48,079 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690009848078"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009848078"}]},"ts":"1690009848078"} 2023-07-22 07:10:48,081 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-22 07:10:48,081 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure d1da522f8583716887d9f5a3f6b36be8, server=jenkins-hbase4.apache.org,33357,1690009829125 in 162 msec 2023-07-22 07:10:48,082 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=d1da522f8583716887d9f5a3f6b36be8, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39057,1690009825637; forceNewPlan=false, retain=false 2023-07-22 07:10:48,232 INFO [jenkins-hbase4:37061] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-22 07:10:48,233 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=d1da522f8583716887d9f5a3f6b36be8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:48,233 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690009848233"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009848233"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009848233"}]},"ts":"1690009848233"} 2023-07-22 07:10:48,235 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure d1da522f8583716887d9f5a3f6b36be8, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:48,392 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 
2023-07-22 07:10:48,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d1da522f8583716887d9f5a3f6b36be8, NAME => 'testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:48,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:48,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:48,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:48,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:48,394 INFO [StoreOpener-d1da522f8583716887d9f5a3f6b36be8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:48,395 DEBUG [StoreOpener-d1da522f8583716887d9f5a3f6b36be8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8/tr 2023-07-22 07:10:48,395 DEBUG [StoreOpener-d1da522f8583716887d9f5a3f6b36be8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8/tr 2023-07-22 07:10:48,395 INFO [StoreOpener-d1da522f8583716887d9f5a3f6b36be8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d1da522f8583716887d9f5a3f6b36be8 columnFamilyName tr 2023-07-22 07:10:48,396 INFO [StoreOpener-d1da522f8583716887d9f5a3f6b36be8-1] regionserver.HStore(310): Store=d1da522f8583716887d9f5a3f6b36be8/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:48,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:48,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:48,401 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:48,402 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d1da522f8583716887d9f5a3f6b36be8; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10871789280, jitterRate=0.012514278292655945}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:48,402 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d1da522f8583716887d9f5a3f6b36be8: 2023-07-22 07:10:48,402 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8., pid=125, masterSystemTime=1690009848388 2023-07-22 07:10:48,403 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:48,404 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:48,404 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=d1da522f8583716887d9f5a3f6b36be8, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:48,404 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690009848404"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009848404"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009848404"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009848404"}]},"ts":"1690009848404"} 2023-07-22 07:10:48,406 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-22 07:10:48,406 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure d1da522f8583716887d9f5a3f6b36be8, server=jenkins-hbase4.apache.org,39057,1690009825637 in 170 msec 2023-07-22 07:10:48,408 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=d1da522f8583716887d9f5a3f6b36be8, REOPEN/MOVE in 491 msec 2023-07-22 07:10:48,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-22 07:10:48,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
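At this point the TransitRegionStateProcedure (pid=123) has finished and RSGroupAdminServer reports that all regions of testRename are back in the default group; the blocked RSGroupAdminService.MoveTables call is then released by ProcedureSyncWait ("waitFor pid=123"). A minimal sketch of the client call that drives this sequence, assuming an already-open Connection named conn and the hbase-rsgroup module's RSGroupAdminClient helper (the same client visible in the stack traces elsewhere in this log, there wrapped by VerifyingRSGroupAdminClient):

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class MoveTableToDefault {
      // Moves testRename back to the default rsgroup; the master reopens every region of the
      // table on servers of the target group (the REOPEN/MOVE procedure above) before returning.
      static void moveBack(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")), "default");
      }
    }
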
2023-07-22 07:10:48,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:48,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:33357] to rsgroup default 2023-07-22 07:10:48,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:48,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-22 07:10:48,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:48,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-22 07:10:48,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33357,1690009829125, jenkins-hbase4.apache.org,34133,1690009825283] are moved back to newgroup 2023-07-22 07:10:48,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-22 07:10:48,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:48,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-22 07:10:48,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:48,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:48,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:48,931 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:48,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:48,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:48,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:48,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:48,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:48,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:48,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:48,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:48,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:48,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 764 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011048945, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:48,946 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 07:10:48,947 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:48,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:48,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:48,948 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:48,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:48,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:48,969 INFO [Listener at localhost/46507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=504 (was 511), OpenFileDescriptor=778 (was 789), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=378 (was 404), ProcessCount=180 (was 180), AvailableMemoryMB=6661 (was 6772) 2023-07-22 07:10:48,969 WARN [Listener at localhost/46507] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-22 07:10:48,987 INFO [Listener at localhost/46507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=504, OpenFileDescriptor=778, MaxFileDescriptor=60000, SystemLoadAverage=378, ProcessCount=180, AvailableMemoryMB=6660 2023-07-22 07:10:48,987 WARN [Listener at localhost/46507] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-22 07:10:48,987 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-22 07:10:48,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:48,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:48,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:48,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
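The ConstraintException stack traces above, and the near-identical ones repeated at every per-method setup and teardown later in this log, come from TestRSGroupsBase attempting to move the active master's address (jenkins-hbase4.apache.org:37061, the port carried by every RpcServer handler tag here) into the "master" rsgroup; that address is not a live RegionServer, so RSGroupAdminServer.moveServers rejects it and the test logs the failure as "Got this on setup, FYI" and continues. A minimal sketch of that call and the tolerated failure, assuming an RSGroupAdminClient like the one in the previous sketch (hostname and port copied from the log):

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class MoveMasterAddress {
      static void tryMoveMasterIntoGroup(RSGroupAdminClient rsGroupAdmin) throws IOException {
        try {
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 37061)), "master");
        } catch (ConstraintException expected) {
          // "Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist."
          // The master is not a registered RegionServer, so the move is refused; the test
          // treats this as informational and carries on with its cleanup.
        }
      }
    }
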
2023-07-22 07:10:48,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:48,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:48,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:48,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:48,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:48,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:48,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:49,000 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:49,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:49,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:49,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:49,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:49,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:49,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:49,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:49,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:49,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:49,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 792 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011049014, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:49,015 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:10:49,017 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:49,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:49,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:49,023 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:49,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:49,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:49,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-22 07:10:49,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:10:49,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-22 07:10:49,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-22 07:10:49,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-22 07:10:49,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:49,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-22 07:10:49,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:49,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 804 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:38908 deadline: 1690011049035, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-22 07:10:49,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-22 07:10:49,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:49,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 807 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.14.131:38908 deadline: 1690011049038, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-22 07:10:49,041 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-22 07:10:49,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-22 07:10:49,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-22 07:10:49,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:49,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 811 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:38908 deadline: 1690011049046, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-22 07:10:49,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:49,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:49,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:49,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
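testBogusArgs probes the admin endpoint with names that do not exist: rsgroup lookups for group "bogus", table "nonexistent" and server bogus:123 return nothing, while removeRSGroup, moveServers and balanceRSGroup against the "bogus" group are rejected with the ConstraintExceptions recorded above. A sketch of the kind of checks this corresponds to, again assuming the RSGroupAdminClient helper (the exact assertions live in TestRSGroupsAdmin1#testBogusArgs and may differ in detail):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class BogusArgsChecks {
      static void run(RSGroupAdminClient rsGroupAdmin) throws IOException {
        // Lookups for unknown names simply come back empty rather than throwing.
        assert rsGroupAdmin.getRSGroupInfo("bogus") == null;
        assert rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent")) == null;
        assert rsGroupAdmin.getRSGroupOfServer(Address.fromParts("bogus", 123)) == null;

        // Mutations against an unknown group are refused with ConstraintException.
        try {
          rsGroupAdmin.removeRSGroup("bogus");
        } catch (ConstraintException expected) {
          // "RSGroup bogus does not exist"
        }
        try {
          rsGroupAdmin.balanceRSGroup("bogus");
        } catch (ConstraintException expected) {
          // "RSGroup does not exist: bogus"
        }
      }
    }
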
2023-07-22 07:10:49,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:49,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:49,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:49,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:49,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:49,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:49,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:49,063 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:49,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:49,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:49,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:49,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:49,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:49,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:49,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:49,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:49,078 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:49,078 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 835 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011049075, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:49,078 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:10:49,081 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:49,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:49,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:49,082 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:49,083 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:49,083 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:49,101 INFO [Listener at localhost/46507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=507 (was 504) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-22 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50c4626a-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50c4626a-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=778 (was 778), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=378 (was 378), ProcessCount=180 (was 180), AvailableMemoryMB=6648 (was 6660) 2023-07-22 07:10:49,101 WARN [Listener at localhost/46507] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-22 07:10:49,120 INFO [Listener at localhost/46507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=507, OpenFileDescriptor=778, MaxFileDescriptor=60000, SystemLoadAverage=378, ProcessCount=180, AvailableMemoryMB=6645 2023-07-22 07:10:49,120 WARN [Listener at localhost/46507] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-22 07:10:49,120 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-22 07:10:49,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:49,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:49,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:49,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 07:10:49,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:49,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:49,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:49,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:49,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:49,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:49,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:49,134 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:49,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:49,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:49,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:49,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:49,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:49,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:49,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:49,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:49,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:49,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 863 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011049172, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:49,172 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:10:49,174 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:49,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:49,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:49,176 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:49,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:49,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:49,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:49,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:49,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_2099860274 2023-07-22 07:10:49,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2099860274 2023-07-22 07:10:49,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 
07:10:49,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:49,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:49,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:49,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:49,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:49,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:33357] to rsgroup Group_testDisabledTableMove_2099860274 2023-07-22 07:10:49,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:49,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2099860274 2023-07-22 07:10:49,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:49,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:49,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-22 07:10:49,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33357,1690009829125, jenkins-hbase4.apache.org,34133,1690009825283] are moved back to default 2023-07-22 07:10:49,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_2099860274 2023-07-22 07:10:49,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:49,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:49,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:49,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_2099860274 2023-07-22 07:10:49,208 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:49,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:49,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-22 07:10:49,213 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:10:49,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 126 2023-07-22 07:10:49,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-22 07:10:49,216 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:49,216 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2099860274 2023-07-22 07:10:49,217 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:49,217 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:49,219 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:10:49,222 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393 2023-07-22 07:10:49,222 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/0f801282908ccebd1a4fbf6fa10bd6e4 2023-07-22 07:10:49,223 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/ec5e98752207f7cd41a61dccc56a9b21 2023-07-22 07:10:49,223 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/e6958e318122c76e0916f49bfb7be53f 2023-07-22 07:10:49,222 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/a41d2628cc1d5d01529447f49eb5dcb5 2023-07-22 07:10:49,223 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393 empty. 2023-07-22 07:10:49,223 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/a41d2628cc1d5d01529447f49eb5dcb5 empty. 2023-07-22 07:10:49,223 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/ec5e98752207f7cd41a61dccc56a9b21 empty. 2023-07-22 07:10:49,224 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/e6958e318122c76e0916f49bfb7be53f empty. 2023-07-22 07:10:49,224 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/0f801282908ccebd1a4fbf6fa10bd6e4 empty. 2023-07-22 07:10:49,224 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393 2023-07-22 07:10:49,224 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/ec5e98752207f7cd41a61dccc56a9b21 2023-07-22 07:10:49,224 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/a41d2628cc1d5d01529447f49eb5dcb5 2023-07-22 07:10:49,224 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/0f801282908ccebd1a4fbf6fa10bd6e4 2023-07-22 07:10:49,224 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/e6958e318122c76e0916f49bfb7be53f 2023-07-22 07:10:49,225 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-22 07:10:49,250 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:49,252 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 0f801282908ccebd1a4fbf6fa10bd6e4, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:49,253 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => fa12030a82b1d949852c920011e93393, NAME => 'Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:49,254 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => a41d2628cc1d5d01529447f49eb5dcb5, NAME => 'Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:49,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-22 07:10:49,361 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:49,361 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing a41d2628cc1d5d01529447f49eb5dcb5, disabling compactions & flushes 2023-07-22 07:10:49,361 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5. 2023-07-22 07:10:49,361 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5. 2023-07-22 07:10:49,362 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5. after waiting 0 ms 2023-07-22 07:10:49,362 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5. 
2023-07-22 07:10:49,362 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5. 2023-07-22 07:10:49,362 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for a41d2628cc1d5d01529447f49eb5dcb5: 2023-07-22 07:10:49,362 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:49,362 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => ec5e98752207f7cd41a61dccc56a9b21, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:49,362 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing fa12030a82b1d949852c920011e93393, disabling compactions & flushes 2023-07-22 07:10:49,363 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393. 2023-07-22 07:10:49,363 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393. 2023-07-22 07:10:49,363 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393. after waiting 0 ms 2023-07-22 07:10:49,363 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393. 2023-07-22 07:10:49,363 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393. 
2023-07-22 07:10:49,363 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for fa12030a82b1d949852c920011e93393: 2023-07-22 07:10:49,363 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => e6958e318122c76e0916f49bfb7be53f, NAME => 'Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp 2023-07-22 07:10:49,364 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:49,364 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 0f801282908ccebd1a4fbf6fa10bd6e4, disabling compactions & flushes 2023-07-22 07:10:49,364 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4. 2023-07-22 07:10:49,364 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4. 2023-07-22 07:10:49,364 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4. after waiting 0 ms 2023-07-22 07:10:49,364 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4. 2023-07-22 07:10:49,364 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4. 2023-07-22 07:10:49,365 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 0f801282908ccebd1a4fbf6fa10bd6e4: 2023-07-22 07:10:49,399 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:49,399 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing e6958e318122c76e0916f49bfb7be53f, disabling compactions & flushes 2023-07-22 07:10:49,399 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f. 
2023-07-22 07:10:49,399 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f. 2023-07-22 07:10:49,399 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f. after waiting 0 ms 2023-07-22 07:10:49,399 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f. 2023-07-22 07:10:49,399 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f. 2023-07-22 07:10:49,400 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for e6958e318122c76e0916f49bfb7be53f: 2023-07-22 07:10:49,401 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:49,401 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing ec5e98752207f7cd41a61dccc56a9b21, disabling compactions & flushes 2023-07-22 07:10:49,401 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21. 2023-07-22 07:10:49,401 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21. 2023-07-22 07:10:49,401 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21. after waiting 0 ms 2023-07-22 07:10:49,401 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21. 2023-07-22 07:10:49,401 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21. 
2023-07-22 07:10:49,401 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for ec5e98752207f7cd41a61dccc56a9b21: 2023-07-22 07:10:49,404 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:10:49,405 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690009849404"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009849404"}]},"ts":"1690009849404"} 2023-07-22 07:10:49,405 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690009849404"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009849404"}]},"ts":"1690009849404"} 2023-07-22 07:10:49,405 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690009849404"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009849404"}]},"ts":"1690009849404"} 2023-07-22 07:10:49,405 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690009849404"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009849404"}]},"ts":"1690009849404"} 2023-07-22 07:10:49,405 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690009849404"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009849404"}]},"ts":"1690009849404"} 2023-07-22 07:10:49,407 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-22 07:10:49,408 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:10:49,409 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009849408"}]},"ts":"1690009849408"} 2023-07-22 07:10:49,410 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-22 07:10:49,414 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:49,414 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:49,414 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:49,414 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:49,415 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fa12030a82b1d949852c920011e93393, ASSIGN}, {pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a41d2628cc1d5d01529447f49eb5dcb5, ASSIGN}, {pid=129, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f801282908ccebd1a4fbf6fa10bd6e4, ASSIGN}, {pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ec5e98752207f7cd41a61dccc56a9b21, ASSIGN}, {pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e6958e318122c76e0916f49bfb7be53f, ASSIGN}] 2023-07-22 07:10:49,417 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a41d2628cc1d5d01529447f49eb5dcb5, ASSIGN 2023-07-22 07:10:49,417 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fa12030a82b1d949852c920011e93393, ASSIGN 2023-07-22 07:10:49,417 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f801282908ccebd1a4fbf6fa10bd6e4, ASSIGN 2023-07-22 07:10:49,418 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ec5e98752207f7cd41a61dccc56a9b21, ASSIGN 2023-07-22 07:10:49,418 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a41d2628cc1d5d01529447f49eb5dcb5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39057,1690009825637; forceNewPlan=false, retain=false 2023-07-22 07:10:49,419 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fa12030a82b1d949852c920011e93393, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41787,1690009825478; forceNewPlan=false, retain=false 2023-07-22 07:10:49,419 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=129, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f801282908ccebd1a4fbf6fa10bd6e4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39057,1690009825637; forceNewPlan=false, retain=false 2023-07-22 07:10:49,420 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ec5e98752207f7cd41a61dccc56a9b21, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39057,1690009825637; forceNewPlan=false, retain=false 2023-07-22 07:10:49,420 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e6958e318122c76e0916f49bfb7be53f, ASSIGN 2023-07-22 07:10:49,421 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e6958e318122c76e0916f49bfb7be53f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41787,1690009825478; forceNewPlan=false, retain=false 2023-07-22 07:10:49,505 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-22 07:10:49,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-22 07:10:49,569 INFO [jenkins-hbase4:37061] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-22 07:10:49,573 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=0f801282908ccebd1a4fbf6fa10bd6e4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:49,573 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=a41d2628cc1d5d01529447f49eb5dcb5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:49,574 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690009849573"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009849573"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009849573"}]},"ts":"1690009849573"} 2023-07-22 07:10:49,573 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=ec5e98752207f7cd41a61dccc56a9b21, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:49,573 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=fa12030a82b1d949852c920011e93393, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:49,574 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690009849573"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009849573"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009849573"}]},"ts":"1690009849573"} 2023-07-22 07:10:49,573 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=e6958e318122c76e0916f49bfb7be53f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:49,574 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690009849573"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009849573"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009849573"}]},"ts":"1690009849573"} 2023-07-22 07:10:49,574 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690009849573"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009849573"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009849573"}]},"ts":"1690009849573"} 2023-07-22 07:10:49,574 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690009849573"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009849573"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009849573"}]},"ts":"1690009849573"} 2023-07-22 07:10:49,575 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=132, ppid=129, state=RUNNABLE; OpenRegionProcedure 0f801282908ccebd1a4fbf6fa10bd6e4, 
server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:49,577 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=127, state=RUNNABLE; OpenRegionProcedure fa12030a82b1d949852c920011e93393, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:49,579 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=130, state=RUNNABLE; OpenRegionProcedure ec5e98752207f7cd41a61dccc56a9b21, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:49,579 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=131, state=RUNNABLE; OpenRegionProcedure e6958e318122c76e0916f49bfb7be53f, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:49,580 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=128, state=RUNNABLE; OpenRegionProcedure a41d2628cc1d5d01529447f49eb5dcb5, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:49,732 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4. 2023-07-22 07:10:49,732 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0f801282908ccebd1a4fbf6fa10bd6e4, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-22 07:10:49,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 0f801282908ccebd1a4fbf6fa10bd6e4 2023-07-22 07:10:49,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:49,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0f801282908ccebd1a4fbf6fa10bd6e4 2023-07-22 07:10:49,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0f801282908ccebd1a4fbf6fa10bd6e4 2023-07-22 07:10:49,734 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393. 
2023-07-22 07:10:49,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fa12030a82b1d949852c920011e93393, NAME => 'Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-22 07:10:49,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove fa12030a82b1d949852c920011e93393 2023-07-22 07:10:49,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:49,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fa12030a82b1d949852c920011e93393 2023-07-22 07:10:49,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fa12030a82b1d949852c920011e93393 2023-07-22 07:10:49,736 INFO [StoreOpener-0f801282908ccebd1a4fbf6fa10bd6e4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0f801282908ccebd1a4fbf6fa10bd6e4 2023-07-22 07:10:49,736 INFO [StoreOpener-fa12030a82b1d949852c920011e93393-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fa12030a82b1d949852c920011e93393 2023-07-22 07:10:49,738 DEBUG [StoreOpener-0f801282908ccebd1a4fbf6fa10bd6e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/0f801282908ccebd1a4fbf6fa10bd6e4/f 2023-07-22 07:10:49,738 DEBUG [StoreOpener-0f801282908ccebd1a4fbf6fa10bd6e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/0f801282908ccebd1a4fbf6fa10bd6e4/f 2023-07-22 07:10:49,738 DEBUG [StoreOpener-fa12030a82b1d949852c920011e93393-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393/f 2023-07-22 07:10:49,738 DEBUG [StoreOpener-fa12030a82b1d949852c920011e93393-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393/f 2023-07-22 07:10:49,739 INFO [StoreOpener-fa12030a82b1d949852c920011e93393-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fa12030a82b1d949852c920011e93393 columnFamilyName f 2023-07-22 07:10:49,739 INFO [StoreOpener-0f801282908ccebd1a4fbf6fa10bd6e4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0f801282908ccebd1a4fbf6fa10bd6e4 columnFamilyName f 2023-07-22 07:10:49,739 INFO [StoreOpener-fa12030a82b1d949852c920011e93393-1] regionserver.HStore(310): Store=fa12030a82b1d949852c920011e93393/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:49,739 INFO [StoreOpener-0f801282908ccebd1a4fbf6fa10bd6e4-1] regionserver.HStore(310): Store=0f801282908ccebd1a4fbf6fa10bd6e4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:49,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/0f801282908ccebd1a4fbf6fa10bd6e4 2023-07-22 07:10:49,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/0f801282908ccebd1a4fbf6fa10bd6e4 2023-07-22 07:10:49,744 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0f801282908ccebd1a4fbf6fa10bd6e4 2023-07-22 07:10:49,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393 2023-07-22 07:10:49,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393 2023-07-22 07:10:49,748 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/0f801282908ccebd1a4fbf6fa10bd6e4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:49,749 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0f801282908ccebd1a4fbf6fa10bd6e4; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10136676320, jitterRate=-0.05594845116138458}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:49,749 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0f801282908ccebd1a4fbf6fa10bd6e4: 2023-07-22 07:10:49,750 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4., pid=132, masterSystemTime=1690009849727 2023-07-22 07:10:49,750 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fa12030a82b1d949852c920011e93393 2023-07-22 07:10:49,751 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4. 2023-07-22 07:10:49,752 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4. 2023-07-22 07:10:49,752 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21. 2023-07-22 07:10:49,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ec5e98752207f7cd41a61dccc56a9b21, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-22 07:10:49,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove ec5e98752207f7cd41a61dccc56a9b21 2023-07-22 07:10:49,752 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=0f801282908ccebd1a4fbf6fa10bd6e4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:49,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:49,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ec5e98752207f7cd41a61dccc56a9b21 2023-07-22 07:10:49,752 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690009849752"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009849752"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009849752"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009849752"}]},"ts":"1690009849752"} 2023-07-22 07:10:49,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ec5e98752207f7cd41a61dccc56a9b21 2023-07-22 07:10:49,753 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:49,753 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fa12030a82b1d949852c920011e93393; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9437267840, jitterRate=-0.12108594179153442}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:49,754 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fa12030a82b1d949852c920011e93393: 2023-07-22 07:10:49,754 INFO [StoreOpener-ec5e98752207f7cd41a61dccc56a9b21-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ec5e98752207f7cd41a61dccc56a9b21 2023-07-22 07:10:49,754 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393., pid=133, masterSystemTime=1690009849730 2023-07-22 07:10:49,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393. 2023-07-22 07:10:49,756 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393. 2023-07-22 07:10:49,756 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f. 
2023-07-22 07:10:49,756 DEBUG [StoreOpener-ec5e98752207f7cd41a61dccc56a9b21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/ec5e98752207f7cd41a61dccc56a9b21/f 2023-07-22 07:10:49,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e6958e318122c76e0916f49bfb7be53f, NAME => 'Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-22 07:10:49,756 DEBUG [StoreOpener-ec5e98752207f7cd41a61dccc56a9b21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/ec5e98752207f7cd41a61dccc56a9b21/f 2023-07-22 07:10:49,756 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=132, resume processing ppid=129 2023-07-22 07:10:49,757 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=129, state=SUCCESS; OpenRegionProcedure 0f801282908ccebd1a4fbf6fa10bd6e4, server=jenkins-hbase4.apache.org,39057,1690009825637 in 179 msec 2023-07-22 07:10:49,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove e6958e318122c76e0916f49bfb7be53f 2023-07-22 07:10:49,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:49,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e6958e318122c76e0916f49bfb7be53f 2023-07-22 07:10:49,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e6958e318122c76e0916f49bfb7be53f 2023-07-22 07:10:49,757 INFO [StoreOpener-ec5e98752207f7cd41a61dccc56a9b21-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ec5e98752207f7cd41a61dccc56a9b21 columnFamilyName f 2023-07-22 07:10:49,757 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=fa12030a82b1d949852c920011e93393, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:49,757 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690009849757"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009849757"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009849757"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009849757"}]},"ts":"1690009849757"} 2023-07-22 07:10:49,758 INFO [StoreOpener-ec5e98752207f7cd41a61dccc56a9b21-1] regionserver.HStore(310): Store=ec5e98752207f7cd41a61dccc56a9b21/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:49,758 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=129, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f801282908ccebd1a4fbf6fa10bd6e4, ASSIGN in 341 msec 2023-07-22 07:10:49,759 INFO [StoreOpener-e6958e318122c76e0916f49bfb7be53f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e6958e318122c76e0916f49bfb7be53f 2023-07-22 07:10:49,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/ec5e98752207f7cd41a61dccc56a9b21 2023-07-22 07:10:49,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/ec5e98752207f7cd41a61dccc56a9b21 2023-07-22 07:10:49,760 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=127 2023-07-22 07:10:49,760 DEBUG [StoreOpener-e6958e318122c76e0916f49bfb7be53f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/e6958e318122c76e0916f49bfb7be53f/f 2023-07-22 07:10:49,760 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=127, state=SUCCESS; OpenRegionProcedure fa12030a82b1d949852c920011e93393, server=jenkins-hbase4.apache.org,41787,1690009825478 in 182 msec 2023-07-22 07:10:49,760 DEBUG [StoreOpener-e6958e318122c76e0916f49bfb7be53f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/e6958e318122c76e0916f49bfb7be53f/f 2023-07-22 07:10:49,761 INFO [StoreOpener-e6958e318122c76e0916f49bfb7be53f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e6958e318122c76e0916f49bfb7be53f columnFamilyName f 2023-07-22 07:10:49,762 INFO [StoreOpener-e6958e318122c76e0916f49bfb7be53f-1] regionserver.HStore(310): Store=e6958e318122c76e0916f49bfb7be53f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:49,762 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fa12030a82b1d949852c920011e93393, ASSIGN in 345 msec 2023-07-22 07:10:49,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/e6958e318122c76e0916f49bfb7be53f 2023-07-22 07:10:49,763 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ec5e98752207f7cd41a61dccc56a9b21 2023-07-22 07:10:49,763 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/e6958e318122c76e0916f49bfb7be53f 2023-07-22 07:10:49,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/ec5e98752207f7cd41a61dccc56a9b21/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:49,765 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ec5e98752207f7cd41a61dccc56a9b21; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11867556000, jitterRate=0.10525228083133698}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:49,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ec5e98752207f7cd41a61dccc56a9b21: 2023-07-22 07:10:49,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e6958e318122c76e0916f49bfb7be53f 2023-07-22 07:10:49,766 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21., pid=134, masterSystemTime=1690009849727 2023-07-22 07:10:49,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21. 2023-07-22 07:10:49,767 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21. 2023-07-22 07:10:49,768 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5. 
2023-07-22 07:10:49,768 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a41d2628cc1d5d01529447f49eb5dcb5, NAME => 'Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-22 07:10:49,768 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=ec5e98752207f7cd41a61dccc56a9b21, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:49,768 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690009849768"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009849768"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009849768"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009849768"}]},"ts":"1690009849768"} 2023-07-22 07:10:49,768 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove a41d2628cc1d5d01529447f49eb5dcb5 2023-07-22 07:10:49,768 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:49,768 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a41d2628cc1d5d01529447f49eb5dcb5 2023-07-22 07:10:49,768 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a41d2628cc1d5d01529447f49eb5dcb5 2023-07-22 07:10:49,768 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/e6958e318122c76e0916f49bfb7be53f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:49,769 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e6958e318122c76e0916f49bfb7be53f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9835740480, jitterRate=-0.08397528529167175}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:49,769 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e6958e318122c76e0916f49bfb7be53f: 2023-07-22 07:10:49,769 INFO [StoreOpener-a41d2628cc1d5d01529447f49eb5dcb5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a41d2628cc1d5d01529447f49eb5dcb5 2023-07-22 07:10:49,770 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f., pid=135, masterSystemTime=1690009849730 2023-07-22 07:10:49,771 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f. 2023-07-22 07:10:49,771 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f. 2023-07-22 07:10:49,771 DEBUG [StoreOpener-a41d2628cc1d5d01529447f49eb5dcb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/a41d2628cc1d5d01529447f49eb5dcb5/f 2023-07-22 07:10:49,771 DEBUG [StoreOpener-a41d2628cc1d5d01529447f49eb5dcb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/a41d2628cc1d5d01529447f49eb5dcb5/f 2023-07-22 07:10:49,771 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=130 2023-07-22 07:10:49,771 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=e6958e318122c76e0916f49bfb7be53f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:49,772 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690009849771"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009849771"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009849771"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009849771"}]},"ts":"1690009849771"} 2023-07-22 07:10:49,772 INFO [StoreOpener-a41d2628cc1d5d01529447f49eb5dcb5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a41d2628cc1d5d01529447f49eb5dcb5 columnFamilyName f 2023-07-22 07:10:49,771 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=130, state=SUCCESS; OpenRegionProcedure ec5e98752207f7cd41a61dccc56a9b21, server=jenkins-hbase4.apache.org,39057,1690009825637 in 190 msec 2023-07-22 07:10:49,772 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ec5e98752207f7cd41a61dccc56a9b21, ASSIGN in 356 msec 2023-07-22 07:10:49,772 INFO [StoreOpener-a41d2628cc1d5d01529447f49eb5dcb5-1] regionserver.HStore(310): Store=a41d2628cc1d5d01529447f49eb5dcb5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:49,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/a41d2628cc1d5d01529447f49eb5dcb5 2023-07-22 07:10:49,774 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/a41d2628cc1d5d01529447f49eb5dcb5 2023-07-22 07:10:49,774 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=131 2023-07-22 07:10:49,774 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=131, state=SUCCESS; OpenRegionProcedure e6958e318122c76e0916f49bfb7be53f, server=jenkins-hbase4.apache.org,41787,1690009825478 in 194 msec 2023-07-22 07:10:49,775 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e6958e318122c76e0916f49bfb7be53f, ASSIGN in 359 msec 2023-07-22 07:10:49,777 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a41d2628cc1d5d01529447f49eb5dcb5 2023-07-22 07:10:49,779 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/a41d2628cc1d5d01529447f49eb5dcb5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:49,779 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a41d2628cc1d5d01529447f49eb5dcb5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9406631040, jitterRate=-0.12393921613693237}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:49,779 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a41d2628cc1d5d01529447f49eb5dcb5: 2023-07-22 07:10:49,780 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5., pid=136, masterSystemTime=1690009849727 2023-07-22 07:10:49,781 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5. 2023-07-22 07:10:49,781 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5. 
2023-07-22 07:10:49,782 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=a41d2628cc1d5d01529447f49eb5dcb5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:49,782 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690009849782"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009849782"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009849782"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009849782"}]},"ts":"1690009849782"} 2023-07-22 07:10:49,784 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=128 2023-07-22 07:10:49,784 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=128, state=SUCCESS; OpenRegionProcedure a41d2628cc1d5d01529447f49eb5dcb5, server=jenkins-hbase4.apache.org,39057,1690009825637 in 203 msec 2023-07-22 07:10:49,791 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-22 07:10:49,791 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a41d2628cc1d5d01529447f49eb5dcb5, ASSIGN in 369 msec 2023-07-22 07:10:49,792 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:10:49,792 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009849792"}]},"ts":"1690009849792"} 2023-07-22 07:10:49,793 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-22 07:10:49,795 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:10:49,797 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 585 msec 2023-07-22 07:10:49,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-22 07:10:49,819 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 126 completed 2023-07-22 07:10:49,819 DEBUG [Listener at localhost/46507] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-22 07:10:49,819 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:49,823 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
2023-07-22 07:10:49,823 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:49,823 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-22 07:10:49,824 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:49,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-22 07:10:49,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:10:49,830 INFO [Listener at localhost/46507] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-22 07:10:49,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-22 07:10:49,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=137, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-22 07:10:49,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=137 2023-07-22 07:10:49,834 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009849834"}]},"ts":"1690009849834"} 2023-07-22 07:10:49,835 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-22 07:10:49,837 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-22 07:10:49,839 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fa12030a82b1d949852c920011e93393, UNASSIGN}, {pid=139, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a41d2628cc1d5d01529447f49eb5dcb5, UNASSIGN}, {pid=140, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f801282908ccebd1a4fbf6fa10bd6e4, UNASSIGN}, {pid=141, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ec5e98752207f7cd41a61dccc56a9b21, UNASSIGN}, {pid=142, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e6958e318122c76e0916f49bfb7be53f, UNASSIGN}] 2023-07-22 07:10:49,841 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fa12030a82b1d949852c920011e93393, UNASSIGN 2023-07-22 07:10:49,841 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, ppid=137, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e6958e318122c76e0916f49bfb7be53f, UNASSIGN 2023-07-22 07:10:49,841 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f801282908ccebd1a4fbf6fa10bd6e4, UNASSIGN 2023-07-22 07:10:49,842 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a41d2628cc1d5d01529447f49eb5dcb5, UNASSIGN 2023-07-22 07:10:49,842 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=141, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ec5e98752207f7cd41a61dccc56a9b21, UNASSIGN 2023-07-22 07:10:49,844 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=fa12030a82b1d949852c920011e93393, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:49,844 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690009849844"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009849844"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009849844"}]},"ts":"1690009849844"} 2023-07-22 07:10:49,844 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=e6958e318122c76e0916f49bfb7be53f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:49,844 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=0f801282908ccebd1a4fbf6fa10bd6e4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:49,844 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690009849844"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009849844"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009849844"}]},"ts":"1690009849844"} 2023-07-22 07:10:49,844 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690009849844"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009849844"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009849844"}]},"ts":"1690009849844"} 2023-07-22 07:10:49,845 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=a41d2628cc1d5d01529447f49eb5dcb5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:49,845 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690009849845"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009849845"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009849845"}]},"ts":"1690009849845"} 2023-07-22 07:10:49,845 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=ec5e98752207f7cd41a61dccc56a9b21, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:49,845 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690009849845"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009849845"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009849845"}]},"ts":"1690009849845"} 2023-07-22 07:10:49,846 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=138, state=RUNNABLE; CloseRegionProcedure fa12030a82b1d949852c920011e93393, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:49,852 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=142, state=RUNNABLE; CloseRegionProcedure e6958e318122c76e0916f49bfb7be53f, server=jenkins-hbase4.apache.org,41787,1690009825478}] 2023-07-22 07:10:49,853 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=140, state=RUNNABLE; CloseRegionProcedure 0f801282908ccebd1a4fbf6fa10bd6e4, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:49,854 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=139, state=RUNNABLE; CloseRegionProcedure a41d2628cc1d5d01529447f49eb5dcb5, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:49,855 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=141, state=RUNNABLE; CloseRegionProcedure ec5e98752207f7cd41a61dccc56a9b21, server=jenkins-hbase4.apache.org,39057,1690009825637}] 2023-07-22 07:10:49,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=137 2023-07-22 07:10:49,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fa12030a82b1d949852c920011e93393 2023-07-22 07:10:50,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fa12030a82b1d949852c920011e93393, disabling compactions & flushes 2023-07-22 07:10:50,000 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393. 2023-07-22 07:10:50,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393. 2023-07-22 07:10:50,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393. 
after waiting 0 ms 2023-07-22 07:10:50,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393. 2023-07-22 07:10:50,004 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:50,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393. 2023-07-22 07:10:50,004 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fa12030a82b1d949852c920011e93393: 2023-07-22 07:10:50,005 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fa12030a82b1d949852c920011e93393 2023-07-22 07:10:50,006 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e6958e318122c76e0916f49bfb7be53f 2023-07-22 07:10:50,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e6958e318122c76e0916f49bfb7be53f, disabling compactions & flushes 2023-07-22 07:10:50,007 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f. 2023-07-22 07:10:50,007 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=fa12030a82b1d949852c920011e93393, regionState=CLOSED 2023-07-22 07:10:50,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f. 2023-07-22 07:10:50,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f. after waiting 0 ms 2023-07-22 07:10:50,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f. 2023-07-22 07:10:50,007 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690009850007"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009850007"}]},"ts":"1690009850007"} 2023-07-22 07:10:50,007 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a41d2628cc1d5d01529447f49eb5dcb5 2023-07-22 07:10:50,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a41d2628cc1d5d01529447f49eb5dcb5, disabling compactions & flushes 2023-07-22 07:10:50,008 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5. 
2023-07-22 07:10:50,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5. 2023-07-22 07:10:50,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5. after waiting 0 ms 2023-07-22 07:10:50,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5. 2023-07-22 07:10:50,011 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=138 2023-07-22 07:10:50,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/e6958e318122c76e0916f49bfb7be53f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:50,011 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=138, state=SUCCESS; CloseRegionProcedure fa12030a82b1d949852c920011e93393, server=jenkins-hbase4.apache.org,41787,1690009825478 in 163 msec 2023-07-22 07:10:50,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/a41d2628cc1d5d01529447f49eb5dcb5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:50,012 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f. 2023-07-22 07:10:50,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e6958e318122c76e0916f49bfb7be53f: 2023-07-22 07:10:50,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5. 
2023-07-22 07:10:50,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a41d2628cc1d5d01529447f49eb5dcb5: 2023-07-22 07:10:50,013 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fa12030a82b1d949852c920011e93393, UNASSIGN in 172 msec 2023-07-22 07:10:50,014 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e6958e318122c76e0916f49bfb7be53f 2023-07-22 07:10:50,014 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=e6958e318122c76e0916f49bfb7be53f, regionState=CLOSED 2023-07-22 07:10:50,014 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690009850014"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009850014"}]},"ts":"1690009850014"} 2023-07-22 07:10:50,015 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a41d2628cc1d5d01529447f49eb5dcb5 2023-07-22 07:10:50,015 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ec5e98752207f7cd41a61dccc56a9b21 2023-07-22 07:10:50,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ec5e98752207f7cd41a61dccc56a9b21, disabling compactions & flushes 2023-07-22 07:10:50,016 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=a41d2628cc1d5d01529447f49eb5dcb5, regionState=CLOSED 2023-07-22 07:10:50,016 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21. 2023-07-22 07:10:50,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21. 2023-07-22 07:10:50,016 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690009850015"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009850015"}]},"ts":"1690009850015"} 2023-07-22 07:10:50,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21. after waiting 0 ms 2023-07-22 07:10:50,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21. 
2023-07-22 07:10:50,020 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=142 2023-07-22 07:10:50,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=139 2023-07-22 07:10:50,020 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=142, state=SUCCESS; CloseRegionProcedure e6958e318122c76e0916f49bfb7be53f, server=jenkins-hbase4.apache.org,41787,1690009825478 in 166 msec 2023-07-22 07:10:50,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=139, state=SUCCESS; CloseRegionProcedure a41d2628cc1d5d01529447f49eb5dcb5, server=jenkins-hbase4.apache.org,39057,1690009825637 in 164 msec 2023-07-22 07:10:50,021 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e6958e318122c76e0916f49bfb7be53f, UNASSIGN in 181 msec 2023-07-22 07:10:50,021 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a41d2628cc1d5d01529447f49eb5dcb5, UNASSIGN in 181 msec 2023-07-22 07:10:50,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/ec5e98752207f7cd41a61dccc56a9b21/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:50,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21. 2023-07-22 07:10:50,026 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ec5e98752207f7cd41a61dccc56a9b21: 2023-07-22 07:10:50,027 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ec5e98752207f7cd41a61dccc56a9b21 2023-07-22 07:10:50,027 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0f801282908ccebd1a4fbf6fa10bd6e4 2023-07-22 07:10:50,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0f801282908ccebd1a4fbf6fa10bd6e4, disabling compactions & flushes 2023-07-22 07:10:50,028 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4. 2023-07-22 07:10:50,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4. 2023-07-22 07:10:50,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4. after waiting 0 ms 2023-07-22 07:10:50,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4. 
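The UNASSIGN and CloseRegionProcedure entries above (pid=137 through pid=147) are the server side of an ordinary table disable. Below is a minimal, hedged sketch of the client call that triggers that flow; the table name comes from the log, while the connection setup and class name are assumptions for illustration only.

import java.util.concurrent.Future;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("Group_testDisabledTableMove");
      // Submits a DisableTableProcedure on the master; the future completes once
      // every region has gone through UNASSIGN/CloseRegionProcedure, which is what
      // the procedure entries above record.
      Future<Void> disable = admin.disableTableAsync(table);
      disable.get();
      // By now the table state stored in hbase:meta is DISABLED.
      System.out.println("disabled=" + admin.isTableDisabled(table));
    }
  }
}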
2023-07-22 07:10:50,029 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=ec5e98752207f7cd41a61dccc56a9b21, regionState=CLOSED 2023-07-22 07:10:50,029 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690009850029"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009850029"}]},"ts":"1690009850029"} 2023-07-22 07:10:50,032 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=141 2023-07-22 07:10:50,032 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=141, state=SUCCESS; CloseRegionProcedure ec5e98752207f7cd41a61dccc56a9b21, server=jenkins-hbase4.apache.org,39057,1690009825637 in 175 msec 2023-07-22 07:10:50,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/Group_testDisabledTableMove/0f801282908ccebd1a4fbf6fa10bd6e4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:50,033 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4. 2023-07-22 07:10:50,033 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ec5e98752207f7cd41a61dccc56a9b21, UNASSIGN in 193 msec 2023-07-22 07:10:50,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0f801282908ccebd1a4fbf6fa10bd6e4: 2023-07-22 07:10:50,035 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0f801282908ccebd1a4fbf6fa10bd6e4 2023-07-22 07:10:50,035 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=0f801282908ccebd1a4fbf6fa10bd6e4, regionState=CLOSED 2023-07-22 07:10:50,036 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690009850035"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009850035"}]},"ts":"1690009850035"} 2023-07-22 07:10:50,039 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=140 2023-07-22 07:10:50,039 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=140, state=SUCCESS; CloseRegionProcedure 0f801282908ccebd1a4fbf6fa10bd6e4, server=jenkins-hbase4.apache.org,39057,1690009825637 in 184 msec 2023-07-22 07:10:50,041 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=137 2023-07-22 07:10:50,041 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f801282908ccebd1a4fbf6fa10bd6e4, UNASSIGN in 200 msec 2023-07-22 07:10:50,041 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009850041"}]},"ts":"1690009850041"} 2023-07-22 07:10:50,043 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-22 07:10:50,046 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-22 07:10:50,048 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=137, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 215 msec 2023-07-22 07:10:50,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=137 2023-07-22 07:10:50,136 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 137 completed 2023-07-22 07:10:50,136 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_2099860274 2023-07-22 07:10:50,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_2099860274 2023-07-22 07:10:50,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:50,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2099860274 2023-07-22 07:10:50,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:50,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:50,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-22 07:10:50,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_2099860274, current retry=0 2023-07-22 07:10:50,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_2099860274. 
2023-07-22 07:10:50,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:50,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:50,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:50,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-22 07:10:50,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:10:50,148 INFO [Listener at localhost/46507] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-22 07:10:50,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-22 07:10:50,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:50,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 923 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:38908 deadline: 1690009910149, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-22 07:10:50,150 DEBUG [Listener at localhost/46507] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
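The TableNotEnabledException above is expected: the test requests a second disable of a table that is already DISABLED, and the test utility then simply deletes it. A small sketch of the guard the cleanup effectively applies, assuming only the public Admin API; the helper class and method names are hypothetical.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.TableNotEnabledException;
import org.apache.hadoop.hbase.client.Admin;

public final class DeleteIfPresentSketch {
  private DeleteIfPresentSketch() {}

  // Disable-then-delete that tolerates an already-disabled table, mirroring
  // "Table: ... already disabled, so just deleting it." above.
  public static void dropTable(Admin admin, TableName table) throws Exception {
    if (!admin.tableExists(table)) {
      return;
    }
    if (!admin.isTableDisabled(table)) {
      try {
        admin.disableTable(table);
      } catch (TableNotEnabledException e) {
        // Raced with another disable; safe to ignore and proceed to the delete.
      }
    }
    admin.deleteTable(table);
  }
}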
2023-07-22 07:10:50,150 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-22 07:10:50,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] procedure2.ProcedureExecutor(1029): Stored pid=149, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-22 07:10:50,153 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=149, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-22 07:10:50,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_2099860274' 2023-07-22 07:10:50,154 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=149, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-22 07:10:50,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:50,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2099860274 2023-07-22 07:10:50,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:50,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:50,160 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393 2023-07-22 07:10:50,161 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/e6958e318122c76e0916f49bfb7be53f 2023-07-22 07:10:50,161 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/a41d2628cc1d5d01529447f49eb5dcb5 2023-07-22 07:10:50,161 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/ec5e98752207f7cd41a61dccc56a9b21 2023-07-22 07:10:50,161 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/0f801282908ccebd1a4fbf6fa10bd6e4 2023-07-22 07:10:50,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149 2023-07-22 07:10:50,163 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/a41d2628cc1d5d01529447f49eb5dcb5/f, FileablePath, 
hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/a41d2628cc1d5d01529447f49eb5dcb5/recovered.edits] 2023-07-22 07:10:50,163 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393/recovered.edits] 2023-07-22 07:10:50,164 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/0f801282908ccebd1a4fbf6fa10bd6e4/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/0f801282908ccebd1a4fbf6fa10bd6e4/recovered.edits] 2023-07-22 07:10:50,164 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/e6958e318122c76e0916f49bfb7be53f/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/e6958e318122c76e0916f49bfb7be53f/recovered.edits] 2023-07-22 07:10:50,164 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/ec5e98752207f7cd41a61dccc56a9b21/f, FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/ec5e98752207f7cd41a61dccc56a9b21/recovered.edits] 2023-07-22 07:10:50,172 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393/recovered.edits/4.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393/recovered.edits/4.seqid 2023-07-22 07:10:50,173 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/e6958e318122c76e0916f49bfb7be53f/recovered.edits/4.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testDisabledTableMove/e6958e318122c76e0916f49bfb7be53f/recovered.edits/4.seqid 2023-07-22 07:10:50,173 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/ec5e98752207f7cd41a61dccc56a9b21/recovered.edits/4.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testDisabledTableMove/ec5e98752207f7cd41a61dccc56a9b21/recovered.edits/4.seqid 2023-07-22 07:10:50,173 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted 
hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393 2023-07-22 07:10:50,173 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/0f801282908ccebd1a4fbf6fa10bd6e4/recovered.edits/4.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testDisabledTableMove/0f801282908ccebd1a4fbf6fa10bd6e4/recovered.edits/4.seqid 2023-07-22 07:10:50,174 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/a41d2628cc1d5d01529447f49eb5dcb5/recovered.edits/4.seqid to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/archive/data/default/Group_testDisabledTableMove/a41d2628cc1d5d01529447f49eb5dcb5/recovered.edits/4.seqid 2023-07-22 07:10:50,174 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/e6958e318122c76e0916f49bfb7be53f 2023-07-22 07:10:50,174 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/0f801282908ccebd1a4fbf6fa10bd6e4 2023-07-22 07:10:50,174 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/a41d2628cc1d5d01529447f49eb5dcb5 2023-07-22 07:10:50,174 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/.tmp/data/default/Group_testDisabledTableMove/ec5e98752207f7cd41a61dccc56a9b21 2023-07-22 07:10:50,174 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-22 07:10:50,177 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=149, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-22 07:10:50,179 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-22 07:10:50,185 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-22 07:10:50,186 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=149, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-22 07:10:50,186 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
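The HFileArchiver entries above move each region directory's contents under the cluster's archive root before the region rows are purged from hbase:meta. A hedged sketch of how one could inspect that relocation with the plain Hadoop FileSystem API; the concrete HDFS paths are copied from the log and would differ on any other run.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ArchiveCheckSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Archived region directory as logged above; adjust for the cluster at hand.
    Path archivedRegion = new Path(
        "hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666"
            + "/archive/data/default/Group_testDisabledTableMove/fa12030a82b1d949852c920011e93393");
    FileSystem fs = archivedRegion.getFileSystem(conf);
    // After DeleteTableProcedure the recovered.edits files (and any HFiles) live
    // under the archive tree rather than under .tmp/data or data.
    for (FileStatus status : fs.listStatus(archivedRegion)) {
      System.out.println("archived: " + status.getPath());
    }
  }
}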
2023-07-22 07:10:50,186 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009850186"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:50,186 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009850186"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:50,186 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009850186"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:50,186 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009850186"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:50,186 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009850186"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:50,188 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-22 07:10:50,188 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => fa12030a82b1d949852c920011e93393, NAME => 'Group_testDisabledTableMove,,1690009849210.fa12030a82b1d949852c920011e93393.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => a41d2628cc1d5d01529447f49eb5dcb5, NAME => 'Group_testDisabledTableMove,aaaaa,1690009849210.a41d2628cc1d5d01529447f49eb5dcb5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 0f801282908ccebd1a4fbf6fa10bd6e4, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690009849210.0f801282908ccebd1a4fbf6fa10bd6e4.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => ec5e98752207f7cd41a61dccc56a9b21, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690009849210.ec5e98752207f7cd41a61dccc56a9b21.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => e6958e318122c76e0916f49bfb7be53f, NAME => 'Group_testDisabledTableMove,zzzzz,1690009849210.e6958e318122c76e0916f49bfb7be53f.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-22 07:10:50,188 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-22 07:10:50,188 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690009850188"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:50,189 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-22 07:10:50,192 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=149, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-22 07:10:50,193 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=149, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 42 msec 2023-07-22 07:10:50,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149 2023-07-22 07:10:50,263 INFO [Listener at localhost/46507] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 149 completed 2023-07-22 07:10:50,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:50,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:50,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:50,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
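Once DeleteTableProcedure pid=149 reports SUCCESS above, the descriptor, region rows and table state are all gone, so the table no longer exists from a client's point of view. A minimal sketch of confirming that via the Admin API; connection setup and class name are assumed.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class VerifyDeleteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("Group_testDisabledTableMove");
      // Expected to print "exists=false" after the delete procedure completes.
      System.out.println("exists=" + admin.tableExists(table));
    }
  }
}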
2023-07-22 07:10:50,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:50,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:33357] to rsgroup default 2023-07-22 07:10:50,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:50,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2099860274 2023-07-22 07:10:50,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:50,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:10:50,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_2099860274, current retry=0 2023-07-22 07:10:50,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33357,1690009829125, jenkins-hbase4.apache.org,34133,1690009825283] are moved back to Group_testDisabledTableMove_2099860274 2023-07-22 07:10:50,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_2099860274 => default 2023-07-22 07:10:50,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:50,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_2099860274 2023-07-22 07:10:50,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:50,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:50,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 07:10:50,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:50,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:50,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
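The entries above are the per-method teardown: the two region servers are moved from Group_testDisabledTableMove_2099860274 back to the default group, after which the empty group is removed. A rough sketch of that reset, again assuming the RSGroupAdminClient API from the traces; the server addresses are copied from the log.

import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class ResetRSGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      String group = "Group_testDisabledTableMove_2099860274";

      // Server addresses as logged; a group can only be removed once it is empty.
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34133));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33357));

      rsGroupAdmin.moveServers(servers, "default");
      rsGroupAdmin.removeRSGroup(group);
    }
  }
}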
2023-07-22 07:10:50,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:50,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:50,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:50,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:50,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:50,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:50,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:50,286 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:50,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:50,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:50,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:50,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:50,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:50,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:50,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:50,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:50,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:50,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 957 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011050293, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:50,294 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:10:50,296 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:50,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:50,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:50,297 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:50,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:50,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:50,316 INFO [Listener at localhost/46507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=510 (was 507) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_12167090_17 at /127.0.0.1:52618 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x422d8bf2-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1563634625_17 at /127.0.0.1:39648 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=793 (was 778) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=378 (was 378), ProcessCount=180 (was 180), AvailableMemoryMB=6575 (was 6645) 2023-07-22 07:10:50,316 WARN [Listener at localhost/46507] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-22 07:10:50,334 INFO [Listener at localhost/46507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=510, OpenFileDescriptor=793, MaxFileDescriptor=60000, SystemLoadAverage=378, ProcessCount=180, AvailableMemoryMB=6574 2023-07-22 07:10:50,334 WARN [Listener at localhost/46507] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-22 07:10:50,334 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-22 07:10:50,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:50,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:50,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:10:50,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-22 07:10:50,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:10:50,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:10:50,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:10:50,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:10:50,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:50,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:10:50,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:10:50,347 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:10:50,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:10:50,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 
07:10:50,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:10:50,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:10:50,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:10:50,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:50,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:50,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37061] to rsgroup master 2023-07-22 07:10:50,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:10:50,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] ipc.CallRunner(144): callId: 985 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38908 deadline: 1690011050364, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 2023-07-22 07:10:50,365 WARN [Listener at localhost/46507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37061 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 07:10:50,366 INFO [Listener at localhost/46507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:50,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:50,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:50,367 INFO [Listener at localhost/46507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33357, jenkins-hbase4.apache.org:34133, jenkins-hbase4.apache.org:39057, jenkins-hbase4.apache.org:41787], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:10:50,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:10:50,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37061] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:10:50,368 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-22 07:10:50,369 INFO [Listener at localhost/46507] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-22 07:10:50,369 DEBUG [Listener at localhost/46507] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4b32111a to 127.0.0.1:56256 2023-07-22 07:10:50,369 DEBUG [Listener at localhost/46507] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:50,370 DEBUG [Listener at localhost/46507] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-22 07:10:50,370 DEBUG [Listener at localhost/46507] util.JVMClusterUtil(257): Found active master hash=164398762, stopped=false 2023-07-22 07:10:50,370 DEBUG [Listener at localhost/46507] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-22 07:10:50,370 DEBUG [Listener at localhost/46507] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-22 07:10:50,371 INFO [Listener at localhost/46507] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,37061,1690009823266 2023-07-22 07:10:50,372 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:50,372 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:50,372 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:50,372 DEBUG 
[Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:33357-0x1018bdde774000b, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:50,372 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:50,372 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:50,372 INFO [Listener at localhost/46507] procedure2.ProcedureExecutor(629): Stopping 2023-07-22 07:10:50,373 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:50,373 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:50,373 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:50,373 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:50,373 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33357-0x1018bdde774000b, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:50,373 DEBUG [Listener at localhost/46507] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x653060c4 to 127.0.0.1:56256 2023-07-22 07:10:50,373 DEBUG [Listener at localhost/46507] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:50,373 INFO [Listener at localhost/46507] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34133,1690009825283' ***** 2023-07-22 07:10:50,373 INFO [Listener at localhost/46507] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 07:10:50,373 INFO [Listener at localhost/46507] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41787,1690009825478' ***** 2023-07-22 07:10:50,374 INFO [Listener at localhost/46507] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 07:10:50,374 INFO [Listener at localhost/46507] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39057,1690009825637' ***** 2023-07-22 07:10:50,374 INFO [Listener at localhost/46507] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 07:10:50,374 INFO [Listener at localhost/46507] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33357,1690009829125' ***** 2023-07-22 07:10:50,374 INFO [Listener at localhost/46507] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 07:10:50,374 INFO [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 07:10:50,374 INFO 
[RS:3;jenkins-hbase4:33357] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 07:10:50,374 INFO [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 07:10:50,374 INFO [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 07:10:50,389 INFO [RS:1;jenkins-hbase4:41787] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@67816748{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:50,389 INFO [RS:2;jenkins-hbase4:39057] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@38a30dd7{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:50,389 INFO [RS:0;jenkins-hbase4:34133] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@40f2000c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:50,389 INFO [RS:3;jenkins-hbase4:33357] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3b6361e2{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:50,392 INFO [RS:0;jenkins-hbase4:34133] server.AbstractConnector(383): Stopped ServerConnector@22a36f37{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:10:50,392 INFO [RS:2;jenkins-hbase4:39057] server.AbstractConnector(383): Stopped ServerConnector@16240f3c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:10:50,392 INFO [RS:1;jenkins-hbase4:41787] server.AbstractConnector(383): Stopped ServerConnector@1ba8dae2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:10:50,392 INFO [RS:3;jenkins-hbase4:33357] server.AbstractConnector(383): Stopped ServerConnector@702b003c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:10:50,393 INFO [RS:1;jenkins-hbase4:41787] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 07:10:50,393 INFO [RS:2;jenkins-hbase4:39057] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 07:10:50,393 INFO [RS:0;jenkins-hbase4:34133] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 07:10:50,393 INFO [RS:3;jenkins-hbase4:33357] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 07:10:50,394 INFO [RS:1;jenkins-hbase4:41787] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@46770358{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 07:10:50,395 INFO [RS:3;jenkins-hbase4:33357] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2474d7bd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 07:10:50,394 INFO [RS:2;jenkins-hbase4:39057] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@7431440f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 07:10:50,395 INFO [RS:0;jenkins-hbase4:34133] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c80b18{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 07:10:50,396 INFO [RS:1;jenkins-hbase4:41787] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@64f476b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/hadoop.log.dir/,STOPPED} 2023-07-22 07:10:50,397 INFO [RS:3;jenkins-hbase4:33357] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@726126ec{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/hadoop.log.dir/,STOPPED} 2023-07-22 07:10:50,399 INFO [RS:0;jenkins-hbase4:34133] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1c6f9d30{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/hadoop.log.dir/,STOPPED} 2023-07-22 07:10:50,399 INFO [RS:2;jenkins-hbase4:39057] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@145f3cb8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/hadoop.log.dir/,STOPPED} 2023-07-22 07:10:50,400 INFO [RS:2;jenkins-hbase4:39057] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 07:10:50,400 INFO [RS:2;jenkins-hbase4:39057] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 07:10:50,400 INFO [RS:2;jenkins-hbase4:39057] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 07:10:50,400 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 07:10:50,401 INFO [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(3305): Received CLOSE for d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:50,401 INFO [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:50,401 DEBUG [RS:2;jenkins-hbase4:39057] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3626d634 to 127.0.0.1:56256 2023-07-22 07:10:50,401 DEBUG [RS:2;jenkins-hbase4:39057] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:50,401 INFO [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-22 07:10:50,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d1da522f8583716887d9f5a3f6b36be8, disabling compactions & flushes 2023-07-22 07:10:50,402 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 
2023-07-22 07:10:50,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:50,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. after waiting 0 ms 2023-07-22 07:10:50,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:50,403 INFO [RS:1;jenkins-hbase4:41787] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 07:10:50,403 INFO [RS:0;jenkins-hbase4:34133] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 07:10:50,403 DEBUG [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(1478): Online Regions={d1da522f8583716887d9f5a3f6b36be8=testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8.} 2023-07-22 07:10:50,404 DEBUG [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(1504): Waiting on d1da522f8583716887d9f5a3f6b36be8 2023-07-22 07:10:50,404 INFO [RS:0;jenkins-hbase4:34133] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 07:10:50,404 INFO [RS:0;jenkins-hbase4:34133] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 07:10:50,404 INFO [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:50,404 DEBUG [RS:0;jenkins-hbase4:34133] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1406d513 to 127.0.0.1:56256 2023-07-22 07:10:50,404 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 07:10:50,404 INFO [RS:3;jenkins-hbase4:33357] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 07:10:50,404 INFO [RS:3;jenkins-hbase4:33357] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 07:10:50,404 INFO [RS:3;jenkins-hbase4:33357] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 07:10:50,404 INFO [RS:3;jenkins-hbase4:33357] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:50,404 DEBUG [RS:3;jenkins-hbase4:33357] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x13fbea4d to 127.0.0.1:56256 2023-07-22 07:10:50,404 DEBUG [RS:3;jenkins-hbase4:33357] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:50,405 INFO [RS:3;jenkins-hbase4:33357] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33357,1690009829125; all regions closed. 2023-07-22 07:10:50,404 DEBUG [RS:0;jenkins-hbase4:34133] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:50,406 INFO [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34133,1690009825283; all regions closed. 
2023-07-22 07:10:50,406 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 07:10:50,404 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 07:10:50,406 INFO [RS:1;jenkins-hbase4:41787] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 07:10:50,406 INFO [RS:1;jenkins-hbase4:41787] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 07:10:50,406 INFO [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(3305): Received CLOSE for 5e60efed13e0136703971d9f91095391 2023-07-22 07:10:50,406 INFO [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(3305): Received CLOSE for b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:50,406 INFO [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(3305): Received CLOSE for ab3ddd109495a2bae7ed1c5a746f16d4 2023-07-22 07:10:50,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5e60efed13e0136703971d9f91095391, disabling compactions & flushes 2023-07-22 07:10:50,406 INFO [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:50,406 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391. 2023-07-22 07:10:50,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391. 2023-07-22 07:10:50,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391. after waiting 0 ms 2023-07-22 07:10:50,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391. 2023-07-22 07:10:50,406 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 5e60efed13e0136703971d9f91095391 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-22 07:10:50,407 DEBUG [RS:1;jenkins-hbase4:41787] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7c415f8f to 127.0.0.1:56256 2023-07-22 07:10:50,407 DEBUG [RS:1;jenkins-hbase4:41787] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:50,407 INFO [RS:1;jenkins-hbase4:41787] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 07:10:50,407 INFO [RS:1;jenkins-hbase4:41787] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 07:10:50,407 INFO [RS:1;jenkins-hbase4:41787] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-22 07:10:50,407 INFO [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-22 07:10:50,407 INFO [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-22 07:10:50,407 DEBUG [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(1478): Online Regions={5e60efed13e0136703971d9f91095391=hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391., b6b55dee15272a98eb856a00e0a41f50=unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50., 1588230740=hbase:meta,,1.1588230740, ab3ddd109495a2bae7ed1c5a746f16d4=hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4.} 2023-07-22 07:10:50,408 DEBUG [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(1504): Waiting on 1588230740, 5e60efed13e0136703971d9f91095391, ab3ddd109495a2bae7ed1c5a746f16d4, b6b55dee15272a98eb856a00e0a41f50 2023-07-22 07:10:50,408 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-22 07:10:50,408 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-22 07:10:50,408 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-22 07:10:50,408 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-22 07:10:50,408 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-22 07:10:50,408 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=36.31 KB heapSize=59.22 KB 2023-07-22 07:10:50,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/testRename/d1da522f8583716887d9f5a3f6b36be8/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-22 07:10:50,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 2023-07-22 07:10:50,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d1da522f8583716887d9f5a3f6b36be8: 2023-07-22 07:10:50,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1690009843576.d1da522f8583716887d9f5a3f6b36be8. 
2023-07-22 07:10:50,433 DEBUG [RS:0;jenkins-hbase4:34133] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/oldWALs 2023-07-22 07:10:50,434 INFO [RS:0;jenkins-hbase4:34133] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34133%2C1690009825283:(num 1690009827702) 2023-07-22 07:10:50,434 DEBUG [RS:0;jenkins-hbase4:34133] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:50,434 INFO [RS:0;jenkins-hbase4:34133] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:10:50,446 INFO [RS:0;jenkins-hbase4:34133] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-22 07:10:50,446 INFO [RS:0;jenkins-hbase4:34133] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 07:10:50,447 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 07:10:50,447 INFO [RS:0;jenkins-hbase4:34133] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 07:10:50,447 INFO [RS:0;jenkins-hbase4:34133] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-22 07:10:50,448 INFO [RS:0;jenkins-hbase4:34133] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34133 2023-07-22 07:10:50,452 DEBUG [RS:3;jenkins-hbase4:33357] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/oldWALs 2023-07-22 07:10:50,452 INFO [RS:3;jenkins-hbase4:33357] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33357%2C1690009829125:(num 1690009829529) 2023-07-22 07:10:50,453 DEBUG [RS:3;jenkins-hbase4:33357] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:50,453 INFO [RS:3;jenkins-hbase4:33357] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:10:50,455 INFO [RS:3;jenkins-hbase4:33357] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-22 07:10:50,456 INFO [RS:3;jenkins-hbase4:33357] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 07:10:50,456 INFO [RS:3;jenkins-hbase4:33357] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 07:10:50,456 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 07:10:50,456 INFO [RS:3;jenkins-hbase4:33357] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-22 07:10:50,458 INFO [RS:3;jenkins-hbase4:33357] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33357 2023-07-22 07:10:50,458 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:50,458 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:50,458 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:50,458 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:33357-0x1018bdde774000b, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:50,458 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:50,458 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:33357-0x1018bdde774000b, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:50,458 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34133,1690009825283 2023-07-22 07:10:50,458 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:50,459 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:50,460 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:50,460 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:50,460 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/rs/jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:50,460 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:10:50,460 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:33357-0x1018bdde774000b, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33357,1690009829125 2023-07-22 07:10:50,461 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34133,1690009825283] 2023-07-22 07:10:50,461 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34133,1690009825283; numProcessing=1 2023-07-22 07:10:50,461 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:10:50,462 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34133,1690009825283 already deleted, retry=false 2023-07-22 07:10:50,462 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34133,1690009825283 expired; onlineServers=3 2023-07-22 07:10:50,462 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33357,1690009829125] 2023-07-22 07:10:50,462 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:10:50,462 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33357,1690009829125; numProcessing=2 2023-07-22 07:10:50,463 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:10:50,463 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33357,1690009829125 already deleted, retry=false 2023-07-22 07:10:50,463 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33357,1690009829125 expired; onlineServers=2 2023-07-22 07:10:50,478 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=33.39 KB at sequenceid=202 (bloomFilter=false), to=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/.tmp/info/da51e380bb294f9595edb002386f86a9 2023-07-22 07:10:50,478 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/namespace/5e60efed13e0136703971d9f91095391/.tmp/info/bae1af66838440419c3162fd4390914f 2023-07-22 07:10:50,485 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for da51e380bb294f9595edb002386f86a9 2023-07-22 07:10:50,486 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/namespace/5e60efed13e0136703971d9f91095391/.tmp/info/bae1af66838440419c3162fd4390914f as 
hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/namespace/5e60efed13e0136703971d9f91095391/info/bae1af66838440419c3162fd4390914f 2023-07-22 07:10:50,491 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/namespace/5e60efed13e0136703971d9f91095391/info/bae1af66838440419c3162fd4390914f, entries=2, sequenceid=6, filesize=4.8 K 2023-07-22 07:10:50,493 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 5e60efed13e0136703971d9f91095391 in 87ms, sequenceid=6, compaction requested=false 2023-07-22 07:10:50,500 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-22 07:10:50,500 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-22 07:10:50,501 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=202 (bloomFilter=false), to=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/.tmp/rep_barrier/629ae430efcc4217b6ee82db1a91960b 2023-07-22 07:10:50,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/namespace/5e60efed13e0136703971d9f91095391/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-22 07:10:50,502 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-22 07:10:50,502 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-22 07:10:50,503 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391. 2023-07-22 07:10:50,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5e60efed13e0136703971d9f91095391: 2023-07-22 07:10:50,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690009828258.5e60efed13e0136703971d9f91095391. 2023-07-22 07:10:50,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b6b55dee15272a98eb856a00e0a41f50, disabling compactions & flushes 2023-07-22 07:10:50,504 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:50,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:50,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. after waiting 0 ms 2023-07-22 07:10:50,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 
2023-07-22 07:10:50,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/default/unmovedTable/b6b55dee15272a98eb856a00e0a41f50/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-22 07:10:50,508 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:50,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b6b55dee15272a98eb856a00e0a41f50: 2023-07-22 07:10:50,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1690009845235.b6b55dee15272a98eb856a00e0a41f50. 2023-07-22 07:10:50,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ab3ddd109495a2bae7ed1c5a746f16d4, disabling compactions & flushes 2023-07-22 07:10:50,509 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4. 2023-07-22 07:10:50,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4. 2023-07-22 07:10:50,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4. after waiting 0 ms 2023-07-22 07:10:50,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4. 
2023-07-22 07:10:50,509 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing ab3ddd109495a2bae7ed1c5a746f16d4 1/1 column families, dataSize=28.50 KB heapSize=46.77 KB 2023-07-22 07:10:50,509 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 629ae430efcc4217b6ee82db1a91960b 2023-07-22 07:10:50,527 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=202 (bloomFilter=false), to=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/.tmp/table/84e83d73ca344ae5affa26fb7a086479 2023-07-22 07:10:50,530 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=28.50 KB at sequenceid=95 (bloomFilter=true), to=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/rsgroup/ab3ddd109495a2bae7ed1c5a746f16d4/.tmp/m/bcfa690ff63e4e90a055e25d82fee91d 2023-07-22 07:10:50,533 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 84e83d73ca344ae5affa26fb7a086479 2023-07-22 07:10:50,535 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/.tmp/info/da51e380bb294f9595edb002386f86a9 as hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/info/da51e380bb294f9595edb002386f86a9 2023-07-22 07:10:50,539 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bcfa690ff63e4e90a055e25d82fee91d 2023-07-22 07:10:50,540 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/rsgroup/ab3ddd109495a2bae7ed1c5a746f16d4/.tmp/m/bcfa690ff63e4e90a055e25d82fee91d as hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/rsgroup/ab3ddd109495a2bae7ed1c5a746f16d4/m/bcfa690ff63e4e90a055e25d82fee91d 2023-07-22 07:10:50,545 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for da51e380bb294f9595edb002386f86a9 2023-07-22 07:10:50,545 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/info/da51e380bb294f9595edb002386f86a9, entries=52, sequenceid=202, filesize=10.7 K 2023-07-22 07:10:50,547 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/.tmp/rep_barrier/629ae430efcc4217b6ee82db1a91960b as hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/rep_barrier/629ae430efcc4217b6ee82db1a91960b 2023-07-22 07:10:50,551 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom 
(CompoundBloomFilter) metadata for bcfa690ff63e4e90a055e25d82fee91d 2023-07-22 07:10:50,551 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/rsgroup/ab3ddd109495a2bae7ed1c5a746f16d4/m/bcfa690ff63e4e90a055e25d82fee91d, entries=28, sequenceid=95, filesize=6.1 K 2023-07-22 07:10:50,555 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~28.50 KB/29185, heapSize ~46.75 KB/47872, currentSize=0 B/0 for ab3ddd109495a2bae7ed1c5a746f16d4 in 46ms, sequenceid=95, compaction requested=false 2023-07-22 07:10:50,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/rsgroup/ab3ddd109495a2bae7ed1c5a746f16d4/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-22 07:10:50,567 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 07:10:50,568 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4. 2023-07-22 07:10:50,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ab3ddd109495a2bae7ed1c5a746f16d4: 2023-07-22 07:10:50,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690009828408.ab3ddd109495a2bae7ed1c5a746f16d4. 2023-07-22 07:10:50,570 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 629ae430efcc4217b6ee82db1a91960b 2023-07-22 07:10:50,570 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/rep_barrier/629ae430efcc4217b6ee82db1a91960b, entries=8, sequenceid=202, filesize=5.8 K 2023-07-22 07:10:50,571 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/.tmp/table/84e83d73ca344ae5affa26fb7a086479 as hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/table/84e83d73ca344ae5affa26fb7a086479 2023-07-22 07:10:50,572 INFO [RS:3;jenkins-hbase4:33357] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33357,1690009829125; zookeeper connection closed. 
2023-07-22 07:10:50,571 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:33357-0x1018bdde774000b, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:50,572 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:33357-0x1018bdde774000b, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:50,572 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@790ccc3f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@790ccc3f 2023-07-22 07:10:50,580 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 84e83d73ca344ae5affa26fb7a086479 2023-07-22 07:10:50,580 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/table/84e83d73ca344ae5affa26fb7a086479, entries=16, sequenceid=202, filesize=6.0 K 2023-07-22 07:10:50,581 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~36.31 KB/37186, heapSize ~59.17 KB/60592, currentSize=0 B/0 for 1588230740 in 173ms, sequenceid=202, compaction requested=false 2023-07-22 07:10:50,581 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-22 07:10:50,594 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/data/hbase/meta/1588230740/recovered.edits/205.seqid, newMaxSeqId=205, maxSeqId=93 2023-07-22 07:10:50,595 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 07:10:50,596 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-22 07:10:50,596 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-22 07:10:50,596 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-22 07:10:50,604 INFO [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39057,1690009825637; all regions closed. 2023-07-22 07:10:50,608 INFO [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41787,1690009825478; all regions closed. 
2023-07-22 07:10:50,608 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/WALs/jenkins-hbase4.apache.org,39057,1690009825637/jenkins-hbase4.apache.org%2C39057%2C1690009825637.meta.1690009828042.meta not finished, retry = 0 2023-07-22 07:10:50,616 DEBUG [RS:1;jenkins-hbase4:41787] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/oldWALs 2023-07-22 07:10:50,616 INFO [RS:1;jenkins-hbase4:41787] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41787%2C1690009825478.meta:.meta(num 1690009836092) 2023-07-22 07:10:50,628 DEBUG [RS:1;jenkins-hbase4:41787] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/oldWALs 2023-07-22 07:10:50,629 INFO [RS:1;jenkins-hbase4:41787] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41787%2C1690009825478:(num 1690009827702) 2023-07-22 07:10:50,629 DEBUG [RS:1;jenkins-hbase4:41787] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:50,629 INFO [RS:1;jenkins-hbase4:41787] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:10:50,629 INFO [RS:1;jenkins-hbase4:41787] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-22 07:10:50,629 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 07:10:50,630 INFO [RS:1;jenkins-hbase4:41787] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41787 2023-07-22 07:10:50,634 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:50,634 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41787,1690009825478 2023-07-22 07:10:50,634 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:50,635 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41787,1690009825478] 2023-07-22 07:10:50,635 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41787,1690009825478; numProcessing=3 2023-07-22 07:10:50,638 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41787,1690009825478 already deleted, retry=false 2023-07-22 07:10:50,638 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41787,1690009825478 expired; onlineServers=1 2023-07-22 07:10:50,711 DEBUG [RS:2;jenkins-hbase4:39057] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/oldWALs 2023-07-22 
07:10:50,711 INFO [RS:2;jenkins-hbase4:39057] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39057%2C1690009825637.meta:.meta(num 1690009828042) 2023-07-22 07:10:50,719 DEBUG [RS:2;jenkins-hbase4:39057] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/oldWALs 2023-07-22 07:10:50,719 INFO [RS:2;jenkins-hbase4:39057] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39057%2C1690009825637:(num 1690009827700) 2023-07-22 07:10:50,719 DEBUG [RS:2;jenkins-hbase4:39057] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:50,720 INFO [RS:2;jenkins-hbase4:39057] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:10:50,720 INFO [RS:2;jenkins-hbase4:39057] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-22 07:10:50,720 INFO [RS:2;jenkins-hbase4:39057] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 07:10:50,720 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 07:10:50,720 INFO [RS:2;jenkins-hbase4:39057] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 07:10:50,720 INFO [RS:2;jenkins-hbase4:39057] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-22 07:10:50,721 INFO [RS:2;jenkins-hbase4:39057] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39057 2023-07-22 07:10:50,722 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39057,1690009825637 2023-07-22 07:10:50,723 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:50,724 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39057,1690009825637] 2023-07-22 07:10:50,724 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39057,1690009825637; numProcessing=4 2023-07-22 07:10:50,725 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39057,1690009825637 already deleted, retry=false 2023-07-22 07:10:50,725 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39057,1690009825637 expired; onlineServers=0 2023-07-22 07:10:50,725 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37061,1690009823266' ***** 2023-07-22 07:10:50,725 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-22 07:10:50,726 DEBUG [M:0;jenkins-hbase4:37061] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@48f18c91, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, 
fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 07:10:50,726 INFO [M:0;jenkins-hbase4:37061] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 07:10:50,729 INFO [M:0;jenkins-hbase4:37061] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@161bf7cb{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-22 07:10:50,729 INFO [M:0;jenkins-hbase4:37061] server.AbstractConnector(383): Stopped ServerConnector@7b5904dd{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:10:50,729 INFO [M:0;jenkins-hbase4:37061] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 07:10:50,730 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-22 07:10:50,730 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:50,730 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 07:10:50,731 INFO [M:0;jenkins-hbase4:37061] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@391456a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 07:10:50,731 INFO [M:0;jenkins-hbase4:37061] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@adb65cd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/hadoop.log.dir/,STOPPED} 2023-07-22 07:10:50,732 INFO [M:0;jenkins-hbase4:37061] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37061,1690009823266 2023-07-22 07:10:50,732 INFO [M:0;jenkins-hbase4:37061] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37061,1690009823266; all regions closed. 2023-07-22 07:10:50,732 DEBUG [M:0;jenkins-hbase4:37061] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:50,732 INFO [M:0;jenkins-hbase4:37061] master.HMaster(1491): Stopping master jetty server 2023-07-22 07:10:50,733 INFO [M:0;jenkins-hbase4:37061] server.AbstractConnector(383): Stopped ServerConnector@75b41500{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:10:50,733 DEBUG [M:0;jenkins-hbase4:37061] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-22 07:10:50,733 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-22 07:10:50,733 DEBUG [M:0;jenkins-hbase4:37061] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-22 07:10:50,733 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690009827306] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690009827306,5,FailOnTimeoutGroup] 2023-07-22 07:10:50,733 INFO [M:0;jenkins-hbase4:37061] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-22 07:10:50,733 INFO [M:0;jenkins-hbase4:37061] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-22 07:10:50,733 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690009827311] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690009827311,5,FailOnTimeoutGroup] 2023-07-22 07:10:50,734 INFO [M:0;jenkins-hbase4:37061] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-22 07:10:50,734 DEBUG [M:0;jenkins-hbase4:37061] master.HMaster(1512): Stopping service threads 2023-07-22 07:10:50,734 INFO [M:0;jenkins-hbase4:37061] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-22 07:10:50,734 ERROR [M:0;jenkins-hbase4:37061] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-22 07:10:50,735 INFO [M:0;jenkins-hbase4:37061] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-22 07:10:50,735 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-22 07:10:50,736 DEBUG [M:0;jenkins-hbase4:37061] zookeeper.ZKUtil(398): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-22 07:10:50,736 WARN [M:0;jenkins-hbase4:37061] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-22 07:10:50,736 INFO [M:0;jenkins-hbase4:37061] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-22 07:10:50,736 INFO [M:0;jenkins-hbase4:37061] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-22 07:10:50,736 DEBUG [M:0;jenkins-hbase4:37061] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-22 07:10:50,736 INFO [M:0;jenkins-hbase4:37061] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:50,736 DEBUG [M:0;jenkins-hbase4:37061] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-22 07:10:50,736 DEBUG [M:0;jenkins-hbase4:37061] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-22 07:10:50,736 DEBUG [M:0;jenkins-hbase4:37061] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:50,737 INFO [M:0;jenkins-hbase4:37061] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=499.82 KB heapSize=597.86 KB 2023-07-22 07:10:50,761 INFO [M:0;jenkins-hbase4:37061] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=499.82 KB at sequenceid=1104 (bloomFilter=true), to=hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/60f4cdf3f9ce479bb96ed59d655ebbf8 2023-07-22 07:10:50,768 DEBUG [M:0;jenkins-hbase4:37061] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/60f4cdf3f9ce479bb96ed59d655ebbf8 as hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/60f4cdf3f9ce479bb96ed59d655ebbf8 2023-07-22 07:10:50,775 INFO [M:0;jenkins-hbase4:37061] regionserver.HStore(1080): Added hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/60f4cdf3f9ce479bb96ed59d655ebbf8, entries=148, sequenceid=1104, filesize=26.2 K 2023-07-22 07:10:50,778 INFO [M:0;jenkins-hbase4:37061] regionserver.HRegion(2948): Finished flush of dataSize ~499.82 KB/511819, heapSize ~597.84 KB/612192, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 42ms, sequenceid=1104, compaction requested=false 2023-07-22 07:10:50,780 INFO [M:0;jenkins-hbase4:37061] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:50,780 DEBUG [M:0;jenkins-hbase4:37061] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 07:10:50,788 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 07:10:50,788 INFO [M:0;jenkins-hbase4:37061] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-22 07:10:50,789 INFO [M:0;jenkins-hbase4:37061] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37061 2023-07-22 07:10:50,791 DEBUG [M:0;jenkins-hbase4:37061] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,37061,1690009823266 already deleted, retry=false 2023-07-22 07:10:51,172 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:51,172 INFO [M:0;jenkins-hbase4:37061] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37061,1690009823266; zookeeper connection closed. 
2023-07-22 07:10:51,172 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): master:37061-0x1018bdde7740000, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:51,272 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:51,272 INFO [RS:2;jenkins-hbase4:39057] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39057,1690009825637; zookeeper connection closed. 2023-07-22 07:10:51,273 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:39057-0x1018bdde7740003, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:51,273 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3789a8d8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3789a8d8 2023-07-22 07:10:51,373 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:51,373 INFO [RS:1;jenkins-hbase4:41787] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41787,1690009825478; zookeeper connection closed. 2023-07-22 07:10:51,373 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:41787-0x1018bdde7740002, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:51,373 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2218299b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2218299b 2023-07-22 07:10:51,473 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:51,473 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): regionserver:34133-0x1018bdde7740001, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:51,473 INFO [RS:0;jenkins-hbase4:34133] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34133,1690009825283; zookeeper connection closed. 
2023-07-22 07:10:51,473 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4bbd9cf0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4bbd9cf0 2023-07-22 07:10:51,474 INFO [Listener at localhost/46507] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-22 07:10:51,474 WARN [Listener at localhost/46507] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-22 07:10:51,483 INFO [Listener at localhost/46507] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 07:10:51,587 WARN [BP-1233006246-172.31.14.131-1690009819581 heartbeating to localhost/127.0.0.1:40817] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-22 07:10:51,587 WARN [BP-1233006246-172.31.14.131-1690009819581 heartbeating to localhost/127.0.0.1:40817] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1233006246-172.31.14.131-1690009819581 (Datanode Uuid 242650d5-ed9b-4880-bf32-8e1e5bc4eed9) service to localhost/127.0.0.1:40817 2023-07-22 07:10:51,589 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/cluster_00bb9b6e-43e0-f121-1ccb-023e0a721165/dfs/data/data5/current/BP-1233006246-172.31.14.131-1690009819581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 07:10:51,590 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/cluster_00bb9b6e-43e0-f121-1ccb-023e0a721165/dfs/data/data6/current/BP-1233006246-172.31.14.131-1690009819581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 07:10:51,592 WARN [Listener at localhost/46507] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-22 07:10:51,596 INFO [Listener at localhost/46507] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 07:10:51,699 WARN [BP-1233006246-172.31.14.131-1690009819581 heartbeating to localhost/127.0.0.1:40817] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-22 07:10:51,699 WARN [BP-1233006246-172.31.14.131-1690009819581 heartbeating to localhost/127.0.0.1:40817] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1233006246-172.31.14.131-1690009819581 (Datanode Uuid ff2595ce-0bca-45d5-828a-c0e725ce9d4b) service to localhost/127.0.0.1:40817 2023-07-22 07:10:51,700 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/cluster_00bb9b6e-43e0-f121-1ccb-023e0a721165/dfs/data/data3/current/BP-1233006246-172.31.14.131-1690009819581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 07:10:51,701 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/cluster_00bb9b6e-43e0-f121-1ccb-023e0a721165/dfs/data/data4/current/BP-1233006246-172.31.14.131-1690009819581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk 
information: sleep interrupted 2023-07-22 07:10:51,702 WARN [Listener at localhost/46507] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-22 07:10:51,705 INFO [Listener at localhost/46507] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 07:10:51,807 WARN [BP-1233006246-172.31.14.131-1690009819581 heartbeating to localhost/127.0.0.1:40817] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-22 07:10:51,808 WARN [BP-1233006246-172.31.14.131-1690009819581 heartbeating to localhost/127.0.0.1:40817] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1233006246-172.31.14.131-1690009819581 (Datanode Uuid d5e292e5-0ef3-4a74-8b1e-f2f2dde23363) service to localhost/127.0.0.1:40817 2023-07-22 07:10:51,808 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/cluster_00bb9b6e-43e0-f121-1ccb-023e0a721165/dfs/data/data1/current/BP-1233006246-172.31.14.131-1690009819581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 07:10:51,809 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/cluster_00bb9b6e-43e0-f121-1ccb-023e0a721165/dfs/data/data2/current/BP-1233006246-172.31.14.131-1690009819581] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 07:10:51,840 INFO [Listener at localhost/46507] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 07:10:51,962 INFO [Listener at localhost/46507] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-22 07:10:52,037 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-22 07:10:52,037 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-22 07:10:52,038 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/hadoop.log.dir so I do NOT create it in target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034 2023-07-22 07:10:52,038 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1e6bc595-35d7-277c-1036-cd06713ba4c7/hadoop.tmp.dir so I do NOT create it in target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034 2023-07-22 07:10:52,038 INFO [Listener at localhost/46507] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/cluster_27d71ade-ddaa-10e5-068b-4e91bdc3e5c4, deleteOnExit=true 2023-07-22 07:10:52,038 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-22 
07:10:52,038 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/test.cache.data in system properties and HBase conf 2023-07-22 07:10:52,038 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/hadoop.tmp.dir in system properties and HBase conf 2023-07-22 07:10:52,038 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/hadoop.log.dir in system properties and HBase conf 2023-07-22 07:10:52,038 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-22 07:10:52,039 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-22 07:10:52,039 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-22 07:10:52,039 DEBUG [Listener at localhost/46507] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-22 07:10:52,039 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-22 07:10:52,039 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-22 07:10:52,039 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-22 07:10:52,040 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-22 07:10:52,040 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-22 07:10:52,040 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-22 07:10:52,040 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-22 07:10:52,040 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-22 07:10:52,040 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-22 07:10:52,040 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/nfs.dump.dir in system properties and HBase conf 2023-07-22 07:10:52,040 INFO [Listener at localhost/46507] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/java.io.tmpdir in system properties and HBase conf 2023-07-22 07:10:52,041 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-22 07:10:52,041 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-22 07:10:52,041 INFO [Listener at localhost/46507] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-22 07:10:52,045 WARN [Listener at localhost/46507] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-22 07:10:52,046 WARN [Listener at localhost/46507] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-22 07:10:52,058 DEBUG [Listener at localhost/46507-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1018bdde774000a, quorum=127.0.0.1:56256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-22 07:10:52,058 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1018bdde774000a, quorum=127.0.0.1:56256, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-22 07:10:52,091 WARN [Listener at localhost/46507] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 07:10:52,093 INFO [Listener at localhost/46507] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 07:10:52,097 INFO [Listener at localhost/46507] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/java.io.tmpdir/Jetty_localhost_36575_hdfs____pjd1yy/webapp 2023-07-22 07:10:52,191 INFO [Listener at localhost/46507] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36575 2023-07-22 07:10:52,196 WARN [Listener at localhost/46507] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-22 07:10:52,196 WARN [Listener at localhost/46507] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-22 07:10:52,238 WARN [Listener at localhost/45035] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 07:10:52,251 WARN [Listener at localhost/45035] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-22 07:10:52,254 WARN [Listener 
at localhost/45035] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 07:10:52,255 INFO [Listener at localhost/45035] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 07:10:52,259 INFO [Listener at localhost/45035] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/java.io.tmpdir/Jetty_localhost_43245_datanode____.k1570l/webapp 2023-07-22 07:10:52,352 INFO [Listener at localhost/45035] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43245 2023-07-22 07:10:52,358 WARN [Listener at localhost/35811] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 07:10:52,374 WARN [Listener at localhost/35811] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-22 07:10:52,376 WARN [Listener at localhost/35811] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 07:10:52,377 INFO [Listener at localhost/35811] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 07:10:52,380 INFO [Listener at localhost/35811] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/java.io.tmpdir/Jetty_localhost_35245_datanode____bqbp4o/webapp 2023-07-22 07:10:52,496 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x476547027ff659f8: Processing first storage report for DS-d15232c5-7700-4629-b936-7e8939fa5ac2 from datanode 601ca87e-2465-4a2d-a0c5-3f987c29b362 2023-07-22 07:10:52,497 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x476547027ff659f8: from storage DS-d15232c5-7700-4629-b936-7e8939fa5ac2 node DatanodeRegistration(127.0.0.1:46223, datanodeUuid=601ca87e-2465-4a2d-a0c5-3f987c29b362, infoPort=38697, infoSecurePort=0, ipcPort=35811, storageInfo=lv=-57;cid=testClusterID;nsid=1459938763;c=1690009852048), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:52,497 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x476547027ff659f8: Processing first storage report for DS-01b2d7fb-96fe-419d-9bf9-ca3c91774d3f from datanode 601ca87e-2465-4a2d-a0c5-3f987c29b362 2023-07-22 07:10:52,497 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x476547027ff659f8: from storage DS-01b2d7fb-96fe-419d-9bf9-ca3c91774d3f node DatanodeRegistration(127.0.0.1:46223, datanodeUuid=601ca87e-2465-4a2d-a0c5-3f987c29b362, infoPort=38697, infoSecurePort=0, ipcPort=35811, storageInfo=lv=-57;cid=testClusterID;nsid=1459938763;c=1690009852048), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:52,508 INFO [Listener at localhost/35811] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35245 2023-07-22 07:10:52,557 WARN [Listener at localhost/41265] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-22 07:10:52,597 WARN [Listener at localhost/41265] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-22 07:10:52,599 WARN [Listener at localhost/41265] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 07:10:52,601 INFO [Listener at localhost/41265] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 07:10:52,609 INFO [Listener at localhost/41265] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/java.io.tmpdir/Jetty_localhost_37233_datanode____cwrtp1/webapp 2023-07-22 07:10:52,689 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb1a42fc1c1977479: Processing first storage report for DS-77b63c19-a1de-460a-b32e-ac496b2ef823 from datanode 3ee58b69-ca83-43cd-9465-a8785ed8b27f 2023-07-22 07:10:52,690 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb1a42fc1c1977479: from storage DS-77b63c19-a1de-460a-b32e-ac496b2ef823 node DatanodeRegistration(127.0.0.1:42667, datanodeUuid=3ee58b69-ca83-43cd-9465-a8785ed8b27f, infoPort=38121, infoSecurePort=0, ipcPort=41265, storageInfo=lv=-57;cid=testClusterID;nsid=1459938763;c=1690009852048), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-22 07:10:52,690 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb1a42fc1c1977479: Processing first storage report for DS-1d46d088-7332-4239-b31a-91dde444f296 from datanode 3ee58b69-ca83-43cd-9465-a8785ed8b27f 2023-07-22 07:10:52,690 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb1a42fc1c1977479: from storage DS-1d46d088-7332-4239-b31a-91dde444f296 node DatanodeRegistration(127.0.0.1:42667, datanodeUuid=3ee58b69-ca83-43cd-9465-a8785ed8b27f, infoPort=38121, infoSecurePort=0, ipcPort=41265, storageInfo=lv=-57;cid=testClusterID;nsid=1459938763;c=1690009852048), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:52,733 INFO [Listener at localhost/41265] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37233 2023-07-22 07:10:52,740 WARN [Listener at localhost/37479] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 07:10:52,850 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x306c5fc7aa9b805a: Processing first storage report for DS-1963c017-acbd-4f2e-9a11-b62fcb87037a from datanode fdbcbc37-f430-471d-bd54-bf528deae911 2023-07-22 07:10:52,850 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x306c5fc7aa9b805a: from storage DS-1963c017-acbd-4f2e-9a11-b62fcb87037a node DatanodeRegistration(127.0.0.1:38717, datanodeUuid=fdbcbc37-f430-471d-bd54-bf528deae911, infoPort=41549, infoSecurePort=0, ipcPort=37479, storageInfo=lv=-57;cid=testClusterID;nsid=1459938763;c=1690009852048), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:52,850 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x306c5fc7aa9b805a: Processing first storage 
report for DS-7a39e397-2d8c-4e14-956f-e96f8560d6f3 from datanode fdbcbc37-f430-471d-bd54-bf528deae911 2023-07-22 07:10:52,850 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x306c5fc7aa9b805a: from storage DS-7a39e397-2d8c-4e14-956f-e96f8560d6f3 node DatanodeRegistration(127.0.0.1:38717, datanodeUuid=fdbcbc37-f430-471d-bd54-bf528deae911, infoPort=41549, infoSecurePort=0, ipcPort=37479, storageInfo=lv=-57;cid=testClusterID;nsid=1459938763;c=1690009852048), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:52,951 DEBUG [Listener at localhost/37479] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034 2023-07-22 07:10:52,954 INFO [Listener at localhost/37479] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/cluster_27d71ade-ddaa-10e5-068b-4e91bdc3e5c4/zookeeper_0, clientPort=56037, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/cluster_27d71ade-ddaa-10e5-068b-4e91bdc3e5c4/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/cluster_27d71ade-ddaa-10e5-068b-4e91bdc3e5c4/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-22 07:10:52,955 INFO [Listener at localhost/37479] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56037 2023-07-22 07:10:52,955 INFO [Listener at localhost/37479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:52,956 INFO [Listener at localhost/37479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:52,973 INFO [Listener at localhost/37479] util.FSUtils(471): Created version file at hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e with version=8 2023-07-22 07:10:52,973 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/hbase-staging 2023-07-22 07:10:52,974 DEBUG [Listener at localhost/37479] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-22 07:10:52,974 DEBUG [Listener at localhost/37479] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-22 07:10:52,974 DEBUG [Listener at localhost/37479] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-22 07:10:52,974 DEBUG [Listener at localhost/37479] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
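[Editor's note] The restart sequence logged above — HBaseTestingUtility reporting "Minicluster is down" and then "Starting up minicluster with option: StartMiniClusterOption{numMasters=1, ..., numRegionServers=3, ..., numDataNodes=3, ..., numZkServers=1, ...}" before bringing ZooKeeper and DFS back up — corresponds roughly to the test-side calls sketched below. This is a minimal, hedged sketch, not the TestRSGroupsAdmin1 source; the TEST_UTIL field name and the restartMiniCluster() helper are illustrative assumptions, while HBaseTestingUtility, StartMiniClusterOption and their builder options are the classes named in the log itself.

    // Sketch only: approximates the shutdown/startup cycle visible in the log above.
    // TEST_UTIL and restartMiniCluster() are illustrative names, not taken from the test source.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterRestartSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      static void restartMiniCluster() throws Exception {
        // Stops HBase, the DataNodes and the MiniZooKeeperCluster
        // ("Minicluster is down" in the log).
        TEST_UTIL.shutdownMiniCluster();

        // Mirrors the logged StartMiniClusterOption{numMasters=1, numRegionServers=3,
        // numDataNodes=3, numZkServers=1}.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        TEST_UTIL.startMiniCluster(option);
      }
    }

A full restart like this (rather than reusing the first cluster) explains why the log repeats the entire DFS, ZooKeeper and Jetty bring-up under a fresh test-data directory.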
2023-07-22 07:10:52,975 INFO [Listener at localhost/37479] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 07:10:52,975 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:52,975 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:52,975 INFO [Listener at localhost/37479] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 07:10:52,975 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:52,976 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 07:10:52,976 INFO [Listener at localhost/37479] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 07:10:52,976 INFO [Listener at localhost/37479] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34769 2023-07-22 07:10:52,977 INFO [Listener at localhost/37479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:52,978 INFO [Listener at localhost/37479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:52,979 INFO [Listener at localhost/37479] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34769 connecting to ZooKeeper ensemble=127.0.0.1:56037 2023-07-22 07:10:52,987 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:347690x0, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:10:52,988 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34769-0x1018bde5f1c0000 connected 2023-07-22 07:10:53,006 DEBUG [Listener at localhost/37479] zookeeper.ZKUtil(164): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 07:10:53,007 DEBUG [Listener at localhost/37479] zookeeper.ZKUtil(164): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:53,008 DEBUG [Listener at localhost/37479] zookeeper.ZKUtil(164): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 07:10:53,009 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34769 2023-07-22 07:10:53,009 DEBUG [Listener at localhost/37479] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34769 2023-07-22 07:10:53,010 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34769 2023-07-22 07:10:53,010 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34769 2023-07-22 07:10:53,010 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34769 2023-07-22 07:10:53,013 INFO [Listener at localhost/37479] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 07:10:53,013 INFO [Listener at localhost/37479] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 07:10:53,013 INFO [Listener at localhost/37479] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 07:10:53,013 INFO [Listener at localhost/37479] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-22 07:10:53,014 INFO [Listener at localhost/37479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 07:10:53,014 INFO [Listener at localhost/37479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 07:10:53,014 INFO [Listener at localhost/37479] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-22 07:10:53,014 INFO [Listener at localhost/37479] http.HttpServer(1146): Jetty bound to port 38307 2023-07-22 07:10:53,015 INFO [Listener at localhost/37479] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:53,017 INFO [Listener at localhost/37479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:53,017 INFO [Listener at localhost/37479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6cfc84e3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/hadoop.log.dir/,AVAILABLE} 2023-07-22 07:10:53,018 INFO [Listener at localhost/37479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:53,018 INFO [Listener at localhost/37479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@15f599c4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 07:10:53,147 INFO [Listener at localhost/37479] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 07:10:53,148 INFO [Listener at localhost/37479] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 07:10:53,148 INFO [Listener at localhost/37479] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 07:10:53,148 INFO [Listener at localhost/37479] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-22 07:10:53,151 INFO [Listener at localhost/37479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:53,152 INFO [Listener at localhost/37479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@46b1fc36{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/java.io.tmpdir/jetty-0_0_0_0-38307-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5083180878457204534/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-22 07:10:53,154 INFO [Listener at localhost/37479] server.AbstractConnector(333): Started ServerConnector@6bab3cdc{HTTP/1.1, (http/1.1)}{0.0.0.0:38307} 2023-07-22 07:10:53,154 INFO [Listener at localhost/37479] server.Server(415): Started @35559ms 2023-07-22 07:10:53,154 INFO [Listener at localhost/37479] master.HMaster(444): hbase.rootdir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e, hbase.cluster.distributed=false 2023-07-22 07:10:53,172 INFO [Listener at localhost/37479] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 07:10:53,172 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:53,173 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:53,173 
INFO [Listener at localhost/37479] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 07:10:53,173 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:53,173 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 07:10:53,173 INFO [Listener at localhost/37479] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 07:10:53,174 INFO [Listener at localhost/37479] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33331 2023-07-22 07:10:53,175 INFO [Listener at localhost/37479] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 07:10:53,178 DEBUG [Listener at localhost/37479] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 07:10:53,178 INFO [Listener at localhost/37479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:53,180 INFO [Listener at localhost/37479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:53,181 INFO [Listener at localhost/37479] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33331 connecting to ZooKeeper ensemble=127.0.0.1:56037 2023-07-22 07:10:53,186 DEBUG [Listener at localhost/37479] zookeeper.ZKUtil(164): regionserver:333310x0, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 07:10:53,186 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:333310x0, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:10:53,188 DEBUG [Listener at localhost/37479] zookeeper.ZKUtil(164): regionserver:333310x0, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:53,189 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33331-0x1018bde5f1c0001 connected 2023-07-22 07:10:53,189 DEBUG [Listener at localhost/37479] zookeeper.ZKUtil(164): regionserver:33331-0x1018bde5f1c0001, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 07:10:53,193 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33331 2023-07-22 07:10:53,194 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33331 2023-07-22 07:10:53,195 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33331 2023-07-22 07:10:53,200 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, 
numCallQueues=1, port=33331 2023-07-22 07:10:53,200 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33331 2023-07-22 07:10:53,202 INFO [Listener at localhost/37479] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 07:10:53,203 INFO [Listener at localhost/37479] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 07:10:53,203 INFO [Listener at localhost/37479] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 07:10:53,203 INFO [Listener at localhost/37479] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 07:10:53,203 INFO [Listener at localhost/37479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 07:10:53,203 INFO [Listener at localhost/37479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 07:10:53,203 INFO [Listener at localhost/37479] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-22 07:10:53,205 INFO [Listener at localhost/37479] http.HttpServer(1146): Jetty bound to port 34737 2023-07-22 07:10:53,205 INFO [Listener at localhost/37479] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:53,218 INFO [Listener at localhost/37479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:53,218 INFO [Listener at localhost/37479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4c16c29d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/hadoop.log.dir/,AVAILABLE} 2023-07-22 07:10:53,219 INFO [Listener at localhost/37479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:53,219 INFO [Listener at localhost/37479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7e73adce{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 07:10:53,361 INFO [Listener at localhost/37479] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 07:10:53,362 INFO [Listener at localhost/37479] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 07:10:53,362 INFO [Listener at localhost/37479] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 07:10:53,362 INFO [Listener at localhost/37479] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-22 07:10:53,363 INFO [Listener at localhost/37479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:53,364 INFO [Listener at localhost/37479] 
handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3ad44ca8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/java.io.tmpdir/jetty-0_0_0_0-34737-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8163130325940552570/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:53,367 INFO [Listener at localhost/37479] server.AbstractConnector(333): Started ServerConnector@6c4b4518{HTTP/1.1, (http/1.1)}{0.0.0.0:34737} 2023-07-22 07:10:53,367 INFO [Listener at localhost/37479] server.Server(415): Started @35773ms 2023-07-22 07:10:53,385 INFO [Listener at localhost/37479] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 07:10:53,385 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:53,386 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:53,386 INFO [Listener at localhost/37479] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 07:10:53,386 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:53,386 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 07:10:53,386 INFO [Listener at localhost/37479] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 07:10:53,389 INFO [Listener at localhost/37479] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45671 2023-07-22 07:10:53,390 INFO [Listener at localhost/37479] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 07:10:53,393 DEBUG [Listener at localhost/37479] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 07:10:53,394 INFO [Listener at localhost/37479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:53,395 INFO [Listener at localhost/37479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:53,396 INFO [Listener at localhost/37479] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45671 connecting to ZooKeeper ensemble=127.0.0.1:56037 2023-07-22 07:10:53,400 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:456710x0, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:10:53,402 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45671-0x1018bde5f1c0002 connected 2023-07-22 07:10:53,402 DEBUG [Listener at localhost/37479] zookeeper.ZKUtil(164): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 07:10:53,403 DEBUG [Listener at localhost/37479] zookeeper.ZKUtil(164): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:53,403 DEBUG [Listener at localhost/37479] zookeeper.ZKUtil(164): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 07:10:53,407 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45671 2023-07-22 07:10:53,407 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45671 2023-07-22 07:10:53,410 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45671 2023-07-22 07:10:53,418 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45671 2023-07-22 07:10:53,430 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45671 2023-07-22 07:10:53,433 INFO [Listener at localhost/37479] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 07:10:53,433 INFO [Listener at localhost/37479] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 07:10:53,433 INFO [Listener at localhost/37479] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 07:10:53,434 INFO [Listener at localhost/37479] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 07:10:53,434 INFO [Listener at localhost/37479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 07:10:53,434 INFO [Listener at localhost/37479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 07:10:53,434 INFO [Listener at localhost/37479] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-22 07:10:53,435 INFO [Listener at localhost/37479] http.HttpServer(1146): Jetty bound to port 41781 2023-07-22 07:10:53,435 INFO [Listener at localhost/37479] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:53,438 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 07:10:53,438 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-22 07:10:53,438 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-22 07:10:53,442 INFO [Listener at localhost/37479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:53,442 INFO [Listener at localhost/37479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@ff264a2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/hadoop.log.dir/,AVAILABLE} 2023-07-22 07:10:53,443 INFO [Listener at localhost/37479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:53,443 INFO [Listener at localhost/37479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4fbf3e9c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 07:10:53,570 INFO [Listener at localhost/37479] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 07:10:53,572 INFO [Listener at localhost/37479] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 07:10:53,572 INFO [Listener at localhost/37479] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 07:10:53,572 INFO [Listener at localhost/37479] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-22 07:10:53,573 INFO [Listener at localhost/37479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:53,574 INFO [Listener at localhost/37479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2ca11e70{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/java.io.tmpdir/jetty-0_0_0_0-41781-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5430620350291168218/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:53,575 INFO [Listener at localhost/37479] server.AbstractConnector(333): Started ServerConnector@36c8a553{HTTP/1.1, (http/1.1)}{0.0.0.0:41781} 2023-07-22 07:10:53,575 INFO [Listener at localhost/37479] server.Server(415): Started @35980ms 2023-07-22 07:10:53,586 INFO [Listener at localhost/37479] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 
2023-07-22 07:10:53,587 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:53,587 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:53,587 INFO [Listener at localhost/37479] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 07:10:53,587 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:53,587 INFO [Listener at localhost/37479] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 07:10:53,587 INFO [Listener at localhost/37479] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 07:10:53,588 INFO [Listener at localhost/37479] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41875 2023-07-22 07:10:53,588 INFO [Listener at localhost/37479] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 07:10:53,589 DEBUG [Listener at localhost/37479] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 07:10:53,590 INFO [Listener at localhost/37479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:53,591 INFO [Listener at localhost/37479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:53,592 INFO [Listener at localhost/37479] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41875 connecting to ZooKeeper ensemble=127.0.0.1:56037 2023-07-22 07:10:53,595 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:418750x0, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:10:53,598 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41875-0x1018bde5f1c0003 connected 2023-07-22 07:10:53,598 DEBUG [Listener at localhost/37479] zookeeper.ZKUtil(164): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 07:10:53,598 DEBUG [Listener at localhost/37479] zookeeper.ZKUtil(164): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:53,599 DEBUG [Listener at localhost/37479] zookeeper.ZKUtil(164): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 07:10:53,599 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, 
port=41875 2023-07-22 07:10:53,599 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41875 2023-07-22 07:10:53,602 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41875 2023-07-22 07:10:53,605 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41875 2023-07-22 07:10:53,605 DEBUG [Listener at localhost/37479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41875 2023-07-22 07:10:53,608 INFO [Listener at localhost/37479] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 07:10:53,608 INFO [Listener at localhost/37479] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 07:10:53,608 INFO [Listener at localhost/37479] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 07:10:53,609 INFO [Listener at localhost/37479] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 07:10:53,609 INFO [Listener at localhost/37479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 07:10:53,609 INFO [Listener at localhost/37479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 07:10:53,609 INFO [Listener at localhost/37479] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-22 07:10:53,610 INFO [Listener at localhost/37479] http.HttpServer(1146): Jetty bound to port 42685 2023-07-22 07:10:53,610 INFO [Listener at localhost/37479] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:53,615 INFO [Listener at localhost/37479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:53,615 INFO [Listener at localhost/37479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@e8a3f97{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/hadoop.log.dir/,AVAILABLE} 2023-07-22 07:10:53,616 INFO [Listener at localhost/37479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:53,616 INFO [Listener at localhost/37479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@53586b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 07:10:53,748 INFO [Listener at localhost/37479] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 07:10:53,749 INFO [Listener at localhost/37479] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 07:10:53,749 INFO [Listener at localhost/37479] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 07:10:53,749 INFO [Listener at localhost/37479] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-22 07:10:53,752 INFO [Listener at localhost/37479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:53,753 INFO [Listener at localhost/37479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2b9ac065{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/java.io.tmpdir/jetty-0_0_0_0-42685-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7398254864669116602/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:53,754 INFO [Listener at localhost/37479] server.AbstractConnector(333): Started ServerConnector@125559c0{HTTP/1.1, (http/1.1)}{0.0.0.0:42685} 2023-07-22 07:10:53,754 INFO [Listener at localhost/37479] server.Server(415): Started @36160ms 2023-07-22 07:10:53,760 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:53,770 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@66b08be9{HTTP/1.1, (http/1.1)}{0.0.0.0:34747} 2023-07-22 07:10:53,770 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @36175ms 2023-07-22 07:10:53,770 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34769,1690009852974 2023-07-22 07:10:53,771 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, 
quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-22 07:10:53,772 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34769,1690009852974 2023-07-22 07:10:53,774 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:33331-0x1018bde5f1c0001, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 07:10:53,774 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 07:10:53,774 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 07:10:53,775 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:53,776 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 07:10:53,777 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 07:10:53,778 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34769,1690009852974 from backup master directory 2023-07-22 07:10:53,778 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 07:10:53,779 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34769,1690009852974 2023-07-22 07:10:53,779 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-22 07:10:53,779 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-22 07:10:53,780 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34769,1690009852974 2023-07-22 07:10:53,807 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/hbase.id with ID: 54348c53-d975-4603-991d-703bd94abf42 2023-07-22 07:10:53,820 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:53,823 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:53,833 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x59e053e0 to 127.0.0.1:56037 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:10:53,837 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1dcbcf2d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:10:53,837 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:53,837 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-22 07:10:53,838 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:10:53,839 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/MasterData/data/master/store-tmp 2023-07-22 07:10:53,866 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:53,866 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-22 07:10:53,866 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:53,866 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:53,866 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-22 07:10:53,866 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:53,866 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:53,866 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 07:10:53,867 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/MasterData/WALs/jenkins-hbase4.apache.org,34769,1690009852974 2023-07-22 07:10:53,870 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34769%2C1690009852974, suffix=, logDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/MasterData/WALs/jenkins-hbase4.apache.org,34769,1690009852974, archiveDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/MasterData/oldWALs, maxLogs=10 2023-07-22 07:10:53,888 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38717,DS-1963c017-acbd-4f2e-9a11-b62fcb87037a,DISK] 2023-07-22 07:10:53,890 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46223,DS-d15232c5-7700-4629-b936-7e8939fa5ac2,DISK] 2023-07-22 07:10:53,890 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42667,DS-77b63c19-a1de-460a-b32e-ac496b2ef823,DISK] 2023-07-22 07:10:53,896 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/MasterData/WALs/jenkins-hbase4.apache.org,34769,1690009852974/jenkins-hbase4.apache.org%2C34769%2C1690009852974.1690009853870 2023-07-22 07:10:53,896 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38717,DS-1963c017-acbd-4f2e-9a11-b62fcb87037a,DISK], DatanodeInfoWithStorage[127.0.0.1:46223,DS-d15232c5-7700-4629-b936-7e8939fa5ac2,DISK], DatanodeInfoWithStorage[127.0.0.1:42667,DS-77b63c19-a1de-460a-b32e-ac496b2ef823,DISK]] 2023-07-22 07:10:53,896 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', 
STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:53,897 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:53,897 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:10:53,897 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:10:53,899 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:10:53,901 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-22 07:10:53,902 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-22 07:10:53,902 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:53,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:10:53,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:10:53,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:10:53,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:53,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next 
sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11879046080, jitterRate=0.10632237792015076}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:53,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 07:10:53,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-22 07:10:53,916 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-22 07:10:53,916 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-22 07:10:53,916 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-22 07:10:53,916 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-22 07:10:53,917 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-22 07:10:53,917 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-22 07:10:53,918 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-22 07:10:53,919 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-22 07:10:53,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-22 07:10:53,920 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-22 07:10:53,920 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-22 07:10:53,922 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:53,923 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-22 07:10:53,923 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-22 07:10:53,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-22 07:10:53,925 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:53,925 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:33331-0x1018bde5f1c0001, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:53,925 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:53,925 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:53,926 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34769,1690009852974, sessionid=0x1018bde5f1c0000, setting cluster-up flag (Was=false) 2023-07-22 07:10:53,925 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:53,931 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:53,936 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-22 07:10:53,937 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34769,1690009852974 2023-07-22 07:10:53,940 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:53,944 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-22 07:10:53,945 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34769,1690009852974 2023-07-22 07:10:53,945 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.hbase-snapshot/.tmp 2023-07-22 07:10:53,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-22 07:10:53,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-22 07:10:53,948 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-22 07:10:53,949 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34769,1690009852974] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-22 07:10:53,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-22 07:10:53,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 
2023-07-22 07:10:53,951 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-22 07:10:53,960 INFO [RS:0;jenkins-hbase4:33331] regionserver.HRegionServer(951): ClusterId : 54348c53-d975-4603-991d-703bd94abf42 2023-07-22 07:10:53,960 INFO [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(951): ClusterId : 54348c53-d975-4603-991d-703bd94abf42 2023-07-22 07:10:53,961 DEBUG [RS:0;jenkins-hbase4:33331] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 07:10:53,966 DEBUG [RS:0;jenkins-hbase4:33331] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 07:10:53,966 DEBUG [RS:0;jenkins-hbase4:33331] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 07:10:53,968 DEBUG [RS:0;jenkins-hbase4:33331] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 07:10:53,960 INFO [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(951): ClusterId : 54348c53-d975-4603-991d-703bd94abf42 2023-07-22 07:10:53,964 DEBUG [RS:1;jenkins-hbase4:45671] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 07:10:53,971 DEBUG [RS:2;jenkins-hbase4:41875] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 07:10:53,971 DEBUG [RS:0;jenkins-hbase4:33331] zookeeper.ReadOnlyZKClient(139): Connect 0x1557a427 to 127.0.0.1:56037 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:10:53,973 DEBUG [RS:1;jenkins-hbase4:45671] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 07:10:53,973 DEBUG [RS:1;jenkins-hbase4:45671] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 07:10:53,973 DEBUG [RS:2;jenkins-hbase4:41875] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 07:10:53,973 DEBUG [RS:2;jenkins-hbase4:41875] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 07:10:53,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-22 07:10:53,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-22 07:10:53,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-22 07:10:53,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-22 07:10:53,975 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 07:10:53,975 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 07:10:53,976 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 07:10:53,976 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 07:10:53,976 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-22 07:10:53,976 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:53,976 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 07:10:53,976 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:53,976 DEBUG [RS:1;jenkins-hbase4:45671] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 07:10:53,977 DEBUG [RS:1;jenkins-hbase4:45671] zookeeper.ReadOnlyZKClient(139): Connect 0x3345ab29 to 127.0.0.1:56037 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:10:53,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690009883980 2023-07-22 07:10:53,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-22 07:10:53,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-22 07:10:53,981 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-22 07:10:53,981 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-22 07:10:53,981 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-22 07:10:53,981 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-22 07:10:53,982 DEBUG [RS:2;jenkins-hbase4:41875] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 07:10:53,982 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-22 07:10:53,989 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:53,991 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-22 07:10:53,983 DEBUG [RS:2;jenkins-hbase4:41875] zookeeper.ReadOnlyZKClient(139): Connect 0x54f63bf8 to 127.0.0.1:56037 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:10:53,991 DEBUG [RS:0;jenkins-hbase4:33331] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b9058fd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:10:53,992 DEBUG [RS:0;jenkins-hbase4:33331] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1fafcf57, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 07:10:53,992 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:54,002 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-22 07:10:54,002 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 
2023-07-22 07:10:54,002 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-22 07:10:54,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-22 07:10:54,009 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-22 07:10:54,009 DEBUG [RS:1;jenkins-hbase4:45671] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@39bdc059, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:10:54,009 DEBUG [RS:1;jenkins-hbase4:45671] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7cbf590a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 07:10:54,010 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690009854009,5,FailOnTimeoutGroup] 2023-07-22 07:10:54,017 DEBUG [RS:2;jenkins-hbase4:41875] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a31df16, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:10:54,019 DEBUG [RS:2;jenkins-hbase4:41875] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5e0e5088, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 07:10:54,020 DEBUG [RS:1;jenkins-hbase4:45671] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:45671 2023-07-22 07:10:54,020 INFO [RS:1;jenkins-hbase4:45671] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 07:10:54,020 INFO [RS:1;jenkins-hbase4:45671] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 07:10:54,020 DEBUG [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-22 07:10:54,021 INFO [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34769,1690009852974 with isa=jenkins-hbase4.apache.org/172.31.14.131:45671, startcode=1690009853385 2023-07-22 07:10:54,021 DEBUG [RS:1;jenkins-hbase4:45671] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 07:10:54,021 DEBUG [RS:0;jenkins-hbase4:33331] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33331 2023-07-22 07:10:54,021 INFO [RS:0;jenkins-hbase4:33331] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 07:10:54,021 INFO [RS:0;jenkins-hbase4:33331] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 07:10:54,021 DEBUG [RS:0;jenkins-hbase4:33331] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 07:10:54,022 INFO [RS:0;jenkins-hbase4:33331] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34769,1690009852974 with isa=jenkins-hbase4.apache.org/172.31.14.131:33331, startcode=1690009853171 2023-07-22 07:10:54,022 DEBUG [RS:0;jenkins-hbase4:33331] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 07:10:54,032 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690009854032,5,FailOnTimeoutGroup] 2023-07-22 07:10:54,032 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,032 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-22 07:10:54,032 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,032 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,035 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40003, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 07:10:54,038 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34769] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:54,038 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34769,1690009852974] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-22 07:10:54,039 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34769,1690009852974] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-22 07:10:54,039 DEBUG [RS:0;jenkins-hbase4:33331] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e 2023-07-22 07:10:54,040 DEBUG [RS:0;jenkins-hbase4:33331] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45035 2023-07-22 07:10:54,040 DEBUG [RS:0;jenkins-hbase4:33331] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38307 2023-07-22 07:10:54,041 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:54,041 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:54,042 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:54,042 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e 2023-07-22 07:10:54,045 DEBUG [RS:2;jenkins-hbase4:41875] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41875 2023-07-22 07:10:54,045 INFO [RS:2;jenkins-hbase4:41875] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 07:10:54,045 INFO [RS:2;jenkins-hbase4:41875] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 07:10:54,045 DEBUG [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-22 07:10:54,045 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55741, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 07:10:54,046 DEBUG [RS:0;jenkins-hbase4:33331] zookeeper.ZKUtil(162): regionserver:33331-0x1018bde5f1c0001, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:54,046 WARN [RS:0;jenkins-hbase4:33331] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 07:10:54,046 INFO [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34769,1690009852974 with isa=jenkins-hbase4.apache.org/172.31.14.131:41875, startcode=1690009853586 2023-07-22 07:10:54,046 INFO [RS:0;jenkins-hbase4:33331] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:10:54,046 DEBUG [RS:2;jenkins-hbase4:41875] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 07:10:54,046 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34769] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45671,1690009853385 2023-07-22 07:10:54,046 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34769,1690009852974] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-22 07:10:54,046 DEBUG [RS:0;jenkins-hbase4:33331] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/WALs/jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:54,046 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34769,1690009852974] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-22 07:10:54,047 DEBUG [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e 2023-07-22 07:10:54,047 DEBUG [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45035 2023-07-22 07:10:54,047 DEBUG [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38307 2023-07-22 07:10:54,048 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35937, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 07:10:54,050 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34769] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:54,050 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34769,1690009852974] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-22 07:10:54,051 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34769,1690009852974] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-22 07:10:54,052 DEBUG [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e 2023-07-22 07:10:54,052 DEBUG [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45035 2023-07-22 07:10:54,052 DEBUG [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38307 2023-07-22 07:10:54,053 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45671,1690009853385] 2023-07-22 07:10:54,054 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33331,1690009853171] 2023-07-22 07:10:54,054 DEBUG [RS:1;jenkins-hbase4:45671] zookeeper.ZKUtil(162): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45671,1690009853385 2023-07-22 07:10:54,054 WARN [RS:1;jenkins-hbase4:45671] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 07:10:54,054 INFO [RS:1;jenkins-hbase4:45671] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:10:54,054 DEBUG [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/WALs/jenkins-hbase4.apache.org,45671,1690009853385 2023-07-22 07:10:54,054 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:54,055 DEBUG [RS:2;jenkins-hbase4:41875] zookeeper.ZKUtil(162): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:54,055 WARN [RS:2;jenkins-hbase4:41875] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
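By this point the master has registered all three region servers and the rsgroup listener reports "Updated with servers: 3". A test sitting on top of this cluster could confirm the same thing from the client side; the sketch below rests on two assumptions: conn is an open Connection to this mini-cluster, and the coprocessor-based RSGroupAdminClient shipped with the hbase-rsgroup module (the component under test here) is on the classpath.

    import java.util.EnumSet;
    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class ServerRegistrationCheck {
      // conn is assumed to be an open Connection to the mini-cluster started above.
      static void assertThreeServersInDefaultGroup(Connection conn) throws Exception {
        try (Admin admin = conn.getAdmin()) {
          int live = admin
              .getClusterMetrics(EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS))
              .getLiveServerMetrics().size();
          if (live != 3) {
            throw new IllegalStateException("expected 3 live region servers, saw " + live);
          }
        }
        // The rsgroup coprocessor client these tests exercise; every freshly
        // registered server should land in the "default" group.
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
        System.out.println("default group servers: " + defaultGroup.getServers());
      }
    }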
2023-07-22 07:10:54,055 INFO [RS:2;jenkins-hbase4:41875] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:10:54,055 DEBUG [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/WALs/jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:54,055 DEBUG [RS:0;jenkins-hbase4:33331] zookeeper.ZKUtil(162): regionserver:33331-0x1018bde5f1c0001, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45671,1690009853385 2023-07-22 07:10:54,063 DEBUG [RS:0;jenkins-hbase4:33331] zookeeper.ZKUtil(162): regionserver:33331-0x1018bde5f1c0001, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:54,068 DEBUG [RS:0;jenkins-hbase4:33331] zookeeper.ZKUtil(162): regionserver:33331-0x1018bde5f1c0001, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:54,068 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41875,1690009853586] 2023-07-22 07:10:54,070 DEBUG [RS:1;jenkins-hbase4:45671] zookeeper.ZKUtil(162): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45671,1690009853385 2023-07-22 07:10:54,070 DEBUG [RS:0;jenkins-hbase4:33331] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 07:10:54,071 INFO [RS:0;jenkins-hbase4:33331] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 07:10:54,072 DEBUG [RS:1;jenkins-hbase4:45671] zookeeper.ZKUtil(162): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:54,072 DEBUG [RS:1;jenkins-hbase4:45671] zookeeper.ZKUtil(162): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:54,075 DEBUG [RS:1;jenkins-hbase4:45671] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 07:10:54,075 INFO [RS:1;jenkins-hbase4:45671] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 07:10:54,077 DEBUG [RS:2;jenkins-hbase4:41875] zookeeper.ZKUtil(162): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45671,1690009853385 2023-07-22 07:10:54,078 DEBUG [RS:2;jenkins-hbase4:41875] zookeeper.ZKUtil(162): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:54,086 INFO [RS:1;jenkins-hbase4:45671] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 07:10:54,079 INFO [RS:0;jenkins-hbase4:33331] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 07:10:54,087 DEBUG [RS:2;jenkins-hbase4:41875] 
zookeeper.ZKUtil(162): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:54,088 DEBUG [RS:2;jenkins-hbase4:41875] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 07:10:54,088 INFO [RS:2;jenkins-hbase4:41875] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 07:10:54,091 INFO [RS:0;jenkins-hbase4:33331] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 07:10:54,091 INFO [RS:0;jenkins-hbase4:33331] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,094 INFO [RS:2;jenkins-hbase4:41875] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 07:10:54,095 INFO [RS:1;jenkins-hbase4:45671] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 07:10:54,095 INFO [RS:1;jenkins-hbase4:45671] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,095 INFO [RS:0;jenkins-hbase4:33331] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 07:10:54,097 INFO [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 07:10:54,095 INFO [RS:2;jenkins-hbase4:41875] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 07:10:54,097 INFO [RS:2;jenkins-hbase4:41875] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,102 INFO [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 07:10:54,103 INFO [RS:0;jenkins-hbase4:33331] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,103 INFO [RS:1;jenkins-hbase4:45671] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-22 07:10:54,103 DEBUG [RS:0;jenkins-hbase4:33331] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,104 DEBUG [RS:0;jenkins-hbase4:33331] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,104 DEBUG [RS:1;jenkins-hbase4:45671] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,104 DEBUG [RS:1;jenkins-hbase4:45671] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,104 DEBUG [RS:1;jenkins-hbase4:45671] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,104 DEBUG [RS:1;jenkins-hbase4:45671] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,104 DEBUG [RS:1;jenkins-hbase4:45671] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,104 DEBUG [RS:1;jenkins-hbase4:45671] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 07:10:54,104 DEBUG [RS:1;jenkins-hbase4:45671] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,104 DEBUG [RS:1;jenkins-hbase4:45671] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,104 DEBUG [RS:1;jenkins-hbase4:45671] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,104 DEBUG [RS:1;jenkins-hbase4:45671] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,104 INFO [RS:2;jenkins-hbase4:41875] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
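The PressureAwareCompactionThroughputController lines above report a 100 MB/s upper and 50 MB/s lower bound on compaction throughput. Those bounds come straight from configuration; a minimal sketch, assuming the 2.x key names used by that controller:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public final class CompactionThroughputSketch {
      // Values mirror the bounds printed in the log (100 MB/s higher, 50 MB/s lower).
      static Configuration withCompactionBounds() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        return conf;
      }
    }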
2023-07-22 07:10:54,105 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:54,104 DEBUG [RS:0;jenkins-hbase4:33331] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,106 DEBUG [RS:0;jenkins-hbase4:33331] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,106 DEBUG [RS:0;jenkins-hbase4:33331] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,106 DEBUG [RS:0;jenkins-hbase4:33331] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 07:10:54,106 DEBUG [RS:0;jenkins-hbase4:33331] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,106 DEBUG [RS:0;jenkins-hbase4:33331] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,106 DEBUG [RS:0;jenkins-hbase4:33331] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,107 DEBUG [RS:0;jenkins-hbase4:33331] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,111 DEBUG [RS:2;jenkins-hbase4:41875] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,111 DEBUG [RS:2;jenkins-hbase4:41875] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,111 DEBUG [RS:2;jenkins-hbase4:41875] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,111 DEBUG [RS:2;jenkins-hbase4:41875] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,111 DEBUG [RS:2;jenkins-hbase4:41875] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,111 DEBUG [RS:2;jenkins-hbase4:41875] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 07:10:54,111 DEBUG [RS:2;jenkins-hbase4:41875] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,111 DEBUG [RS:2;jenkins-hbase4:41875] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,111 DEBUG [RS:2;jenkins-hbase4:41875] executor.ExecutorService(93): Starting executor 
service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,111 DEBUG [RS:2;jenkins-hbase4:41875] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:10:54,112 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-22 07:10:54,114 INFO [RS:2;jenkins-hbase4:41875] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,112 INFO [RS:0;jenkins-hbase4:33331] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,114 INFO [RS:1;jenkins-hbase4:45671] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,114 INFO [RS:0;jenkins-hbase4:33331] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,114 INFO [RS:1;jenkins-hbase4:45671] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,114 INFO [RS:2;jenkins-hbase4:41875] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,114 INFO [RS:1;jenkins-hbase4:45671] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,114 INFO [RS:2;jenkins-hbase4:41875] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,114 INFO [RS:1;jenkins-hbase4:45671] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,114 INFO [RS:0;jenkins-hbase4:33331] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,115 INFO [RS:0;jenkins-hbase4:33331] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,115 INFO [RS:2;jenkins-hbase4:41875] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-22 07:10:54,116 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/info 2023-07-22 07:10:54,117 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-22 07:10:54,117 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:54,117 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-22 07:10:54,119 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/rep_barrier 2023-07-22 07:10:54,119 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-22 07:10:54,120 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:54,120 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-22 07:10:54,121 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/table 2023-07-22 07:10:54,122 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-22 07:10:54,126 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:54,131 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740 2023-07-22 07:10:54,132 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740 2023-07-22 07:10:54,135 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-22 07:10:54,136 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-22 07:10:54,137 INFO [RS:2;jenkins-hbase4:41875] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 07:10:54,137 INFO [RS:2;jenkins-hbase4:41875] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41875,1690009853586-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,142 INFO [RS:1;jenkins-hbase4:45671] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 07:10:54,142 INFO [RS:1;jenkins-hbase4:45671] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45671,1690009853385-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,142 INFO [RS:0;jenkins-hbase4:33331] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 07:10:54,143 INFO [RS:0;jenkins-hbase4:33331] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33331,1690009853171-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
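The MemStoreFlusher and HeapMemoryManager entries above pin the global memstore limit at 782.4 MB with a 743.3 MB low-water mark, consistent with the default 0.4 heap fraction and the default 0.95 lower-limit ratio. A short sketch of the two keys that control those numbers, assuming the standard 2.x names:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public final class MemstoreLimitSketch {
      // 743.3 MB / 782.4 MB is the 0.95 ratio; 782.4 MB is 40% of the test JVM heap.
      static Configuration withMemstoreLimits() {
        Configuration conf = HBaseConfiguration.create();
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
        return conf;
      }
    }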
2023-07-22 07:10:54,144 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:54,144 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9627026880, jitterRate=-0.10341325402259827}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-22 07:10:54,144 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-22 07:10:54,144 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-22 07:10:54,144 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-22 07:10:54,144 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-22 07:10:54,144 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-22 07:10:54,145 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-22 07:10:54,145 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-22 07:10:54,145 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-22 07:10:54,146 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-22 07:10:54,146 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-22 07:10:54,146 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-22 07:10:54,148 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-22 07:10:54,149 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-22 07:10:54,154 INFO [RS:2;jenkins-hbase4:41875] regionserver.Replication(203): jenkins-hbase4.apache.org,41875,1690009853586 started 2023-07-22 07:10:54,154 INFO [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41875,1690009853586, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41875, sessionid=0x1018bde5f1c0003 2023-07-22 07:10:54,155 DEBUG [RS:2;jenkins-hbase4:41875] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 07:10:54,155 DEBUG [RS:2;jenkins-hbase4:41875] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:54,155 DEBUG [RS:2;jenkins-hbase4:41875] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41875,1690009853586' 2023-07-22 07:10:54,155 DEBUG 
[RS:2;jenkins-hbase4:41875] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 07:10:54,155 DEBUG [RS:2;jenkins-hbase4:41875] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 07:10:54,156 DEBUG [RS:2;jenkins-hbase4:41875] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 07:10:54,156 DEBUG [RS:2;jenkins-hbase4:41875] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 07:10:54,156 DEBUG [RS:2;jenkins-hbase4:41875] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:54,156 DEBUG [RS:2;jenkins-hbase4:41875] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41875,1690009853586' 2023-07-22 07:10:54,156 DEBUG [RS:2;jenkins-hbase4:41875] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 07:10:54,156 INFO [RS:1;jenkins-hbase4:45671] regionserver.Replication(203): jenkins-hbase4.apache.org,45671,1690009853385 started 2023-07-22 07:10:54,156 INFO [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45671,1690009853385, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45671, sessionid=0x1018bde5f1c0002 2023-07-22 07:10:54,156 DEBUG [RS:1;jenkins-hbase4:45671] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 07:10:54,156 DEBUG [RS:1;jenkins-hbase4:45671] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45671,1690009853385 2023-07-22 07:10:54,156 DEBUG [RS:1;jenkins-hbase4:45671] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45671,1690009853385' 2023-07-22 07:10:54,156 DEBUG [RS:1;jenkins-hbase4:45671] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 07:10:54,156 DEBUG [RS:2;jenkins-hbase4:41875] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 07:10:54,157 DEBUG [RS:2;jenkins-hbase4:41875] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 07:10:54,157 DEBUG [RS:1;jenkins-hbase4:45671] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 07:10:54,157 INFO [RS:2;jenkins-hbase4:41875] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-22 07:10:54,157 DEBUG [RS:1;jenkins-hbase4:45671] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 07:10:54,157 DEBUG [RS:1;jenkins-hbase4:45671] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 07:10:54,158 DEBUG [RS:1;jenkins-hbase4:45671] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45671,1690009853385 2023-07-22 07:10:54,158 DEBUG [RS:1;jenkins-hbase4:45671] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45671,1690009853385' 2023-07-22 07:10:54,158 DEBUG [RS:1;jenkins-hbase4:45671] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 
2023-07-22 07:10:54,158 INFO [RS:0;jenkins-hbase4:33331] regionserver.Replication(203): jenkins-hbase4.apache.org,33331,1690009853171 started 2023-07-22 07:10:54,158 INFO [RS:0;jenkins-hbase4:33331] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33331,1690009853171, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33331, sessionid=0x1018bde5f1c0001 2023-07-22 07:10:54,158 DEBUG [RS:1;jenkins-hbase4:45671] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 07:10:54,158 DEBUG [RS:0;jenkins-hbase4:33331] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 07:10:54,158 DEBUG [RS:0;jenkins-hbase4:33331] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:54,158 DEBUG [RS:0;jenkins-hbase4:33331] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33331,1690009853171' 2023-07-22 07:10:54,158 DEBUG [RS:0;jenkins-hbase4:33331] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 07:10:54,159 DEBUG [RS:1;jenkins-hbase4:45671] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 07:10:54,159 INFO [RS:1;jenkins-hbase4:45671] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-22 07:10:54,159 DEBUG [RS:0;jenkins-hbase4:33331] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 07:10:54,159 DEBUG [RS:0;jenkins-hbase4:33331] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 07:10:54,159 DEBUG [RS:0;jenkins-hbase4:33331] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 07:10:54,159 DEBUG [RS:0;jenkins-hbase4:33331] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:54,160 DEBUG [RS:0;jenkins-hbase4:33331] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33331,1690009853171' 2023-07-22 07:10:54,160 DEBUG [RS:0;jenkins-hbase4:33331] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 07:10:54,160 INFO [RS:2;jenkins-hbase4:41875] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,160 INFO [RS:1;jenkins-hbase4:45671] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-22 07:10:54,160 DEBUG [RS:0;jenkins-hbase4:33331] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 07:10:54,160 DEBUG [RS:1;jenkins-hbase4:45671] zookeeper.ZKUtil(398): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-22 07:10:54,160 DEBUG [RS:0;jenkins-hbase4:33331] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 07:10:54,160 INFO [RS:1;jenkins-hbase4:45671] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-22 07:10:54,160 DEBUG [RS:2;jenkins-hbase4:41875] zookeeper.ZKUtil(398): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-22 07:10:54,160 INFO [RS:0;jenkins-hbase4:33331] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-22 07:10:54,161 INFO [RS:2;jenkins-hbase4:41875] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-22 07:10:54,161 INFO [RS:0;jenkins-hbase4:33331] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,161 INFO [RS:2;jenkins-hbase4:41875] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,161 DEBUG [RS:0;jenkins-hbase4:33331] zookeeper.ZKUtil(398): regionserver:33331-0x1018bde5f1c0001, quorum=127.0.0.1:56037, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-22 07:10:54,161 INFO [RS:0;jenkins-hbase4:33331] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-22 07:10:54,161 INFO [RS:1;jenkins-hbase4:45671] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,161 INFO [RS:0;jenkins-hbase4:33331] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,161 INFO [RS:1;jenkins-hbase4:45671] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,161 INFO [RS:0;jenkins-hbase4:33331] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,161 INFO [RS:2;jenkins-hbase4:41875] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
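The quota manager entries above note that the /hbase/rpc-throttle znode does not exist yet and that RPC throttling starts out enabled. A hedged sketch of the related knobs, assuming the hbase.quota.enabled key and a 2.x Admin that exposes switchRpcThrottle/isRpcThrottleEnabled:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public final class RpcThrottleSketch {
      // Quota support is gated by hbase.quota.enabled; the runtime throttle switch is
      // what the servers probe for under the /hbase/rpc-throttle znode above.
      static Configuration withQuotasEnabled() {
        Configuration conf = HBaseConfiguration.create();
        conf.setBoolean("hbase.quota.enabled", true);
        return conf;
      }

      static void toggleRpcThrottle(Connection conn, boolean enable) throws Exception {
        try (Admin admin = conn.getAdmin()) {
          admin.switchRpcThrottle(enable); // returns the previous state
          System.out.println("rpc throttle now: " + admin.isRpcThrottleEnabled());
        }
      }
    }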
2023-07-22 07:10:54,266 INFO [RS:0;jenkins-hbase4:33331] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33331%2C1690009853171, suffix=, logDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/WALs/jenkins-hbase4.apache.org,33331,1690009853171, archiveDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/oldWALs, maxLogs=32 2023-07-22 07:10:54,267 INFO [RS:1;jenkins-hbase4:45671] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45671%2C1690009853385, suffix=, logDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/WALs/jenkins-hbase4.apache.org,45671,1690009853385, archiveDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/oldWALs, maxLogs=32 2023-07-22 07:10:54,267 INFO [RS:2;jenkins-hbase4:41875] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41875%2C1690009853586, suffix=, logDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/WALs/jenkins-hbase4.apache.org,41875,1690009853586, archiveDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/oldWALs, maxLogs=32 2023-07-22 07:10:54,291 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46223,DS-d15232c5-7700-4629-b936-7e8939fa5ac2,DISK] 2023-07-22 07:10:54,299 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38717,DS-1963c017-acbd-4f2e-9a11-b62fcb87037a,DISK] 2023-07-22 07:10:54,300 DEBUG [jenkins-hbase4:34769] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-22 07:10:54,300 DEBUG [jenkins-hbase4:34769] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:54,300 DEBUG [jenkins-hbase4:34769] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:54,300 DEBUG [jenkins-hbase4:34769] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:54,300 DEBUG [jenkins-hbase4:34769] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:54,300 DEBUG [jenkins-hbase4:34769] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:54,303 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46223,DS-d15232c5-7700-4629-b936-7e8939fa5ac2,DISK] 2023-07-22 07:10:54,303 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45671,1690009853385, state=OPENING 2023-07-22 07:10:54,303 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:46223,DS-d15232c5-7700-4629-b936-7e8939fa5ac2,DISK] 2023-07-22 07:10:54,303 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42667,DS-77b63c19-a1de-460a-b32e-ac496b2ef823,DISK] 2023-07-22 07:10:54,304 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42667,DS-77b63c19-a1de-460a-b32e-ac496b2ef823,DISK] 2023-07-22 07:10:54,304 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42667,DS-77b63c19-a1de-460a-b32e-ac496b2ef823,DISK] 2023-07-22 07:10:54,304 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38717,DS-1963c017-acbd-4f2e-9a11-b62fcb87037a,DISK] 2023-07-22 07:10:54,305 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38717,DS-1963c017-acbd-4f2e-9a11-b62fcb87037a,DISK] 2023-07-22 07:10:54,305 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-22 07:10:54,307 INFO [RS:1;jenkins-hbase4:45671] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/WALs/jenkins-hbase4.apache.org,45671,1690009853385/jenkins-hbase4.apache.org%2C45671%2C1690009853385.1690009854271 2023-07-22 07:10:54,307 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:54,308 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45671,1690009853385}] 2023-07-22 07:10:54,308 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 07:10:54,312 DEBUG [RS:1;jenkins-hbase4:45671] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46223,DS-d15232c5-7700-4629-b936-7e8939fa5ac2,DISK], DatanodeInfoWithStorage[127.0.0.1:42667,DS-77b63c19-a1de-460a-b32e-ac496b2ef823,DISK], DatanodeInfoWithStorage[127.0.0.1:38717,DS-1963c017-acbd-4f2e-9a11-b62fcb87037a,DISK]] 2023-07-22 07:10:54,312 INFO [RS:0;jenkins-hbase4:33331] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/WALs/jenkins-hbase4.apache.org,33331,1690009853171/jenkins-hbase4.apache.org%2C33331%2C1690009853171.1690009854270 2023-07-22 07:10:54,312 INFO [RS:2;jenkins-hbase4:41875] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/WALs/jenkins-hbase4.apache.org,41875,1690009853586/jenkins-hbase4.apache.org%2C41875%2C1690009853586.1690009854272 2023-07-22 07:10:54,312 DEBUG 
[RS:0;jenkins-hbase4:33331] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38717,DS-1963c017-acbd-4f2e-9a11-b62fcb87037a,DISK], DatanodeInfoWithStorage[127.0.0.1:42667,DS-77b63c19-a1de-460a-b32e-ac496b2ef823,DISK], DatanodeInfoWithStorage[127.0.0.1:46223,DS-d15232c5-7700-4629-b936-7e8939fa5ac2,DISK]] 2023-07-22 07:10:54,313 DEBUG [RS:2;jenkins-hbase4:41875] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42667,DS-77b63c19-a1de-460a-b32e-ac496b2ef823,DISK], DatanodeInfoWithStorage[127.0.0.1:38717,DS-1963c017-acbd-4f2e-9a11-b62fcb87037a,DISK], DatanodeInfoWithStorage[127.0.0.1:46223,DS-d15232c5-7700-4629-b936-7e8939fa5ac2,DISK]] 2023-07-22 07:10:54,467 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45671,1690009853385 2023-07-22 07:10:54,467 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 07:10:54,468 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50448, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 07:10:54,473 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-22 07:10:54,473 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:10:54,475 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45671%2C1690009853385.meta, suffix=.meta, logDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/WALs/jenkins-hbase4.apache.org,45671,1690009853385, archiveDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/oldWALs, maxLogs=32 2023-07-22 07:10:54,492 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38717,DS-1963c017-acbd-4f2e-9a11-b62fcb87037a,DISK] 2023-07-22 07:10:54,493 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42667,DS-77b63c19-a1de-460a-b32e-ac496b2ef823,DISK] 2023-07-22 07:10:54,493 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46223,DS-d15232c5-7700-4629-b936-7e8939fa5ac2,DISK] 2023-07-22 07:10:54,503 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/WALs/jenkins-hbase4.apache.org,45671,1690009853385/jenkins-hbase4.apache.org%2C45671%2C1690009853385.meta.1690009854476.meta 2023-07-22 07:10:54,503 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38717,DS-1963c017-acbd-4f2e-9a11-b62fcb87037a,DISK], 
DatanodeInfoWithStorage[127.0.0.1:46223,DS-d15232c5-7700-4629-b936-7e8939fa5ac2,DISK], DatanodeInfoWithStorage[127.0.0.1:42667,DS-77b63c19-a1de-460a-b32e-ac496b2ef823,DISK]] 2023-07-22 07:10:54,503 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:54,504 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-22 07:10:54,504 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-22 07:10:54,504 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-22 07:10:54,504 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-22 07:10:54,504 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:54,504 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-22 07:10:54,504 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-22 07:10:54,507 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-22 07:10:54,508 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/info 2023-07-22 07:10:54,508 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/info 2023-07-22 07:10:54,508 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-22 07:10:54,509 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 
07:10:54,509 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-22 07:10:54,510 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/rep_barrier 2023-07-22 07:10:54,510 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/rep_barrier 2023-07-22 07:10:54,510 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-22 07:10:54,511 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:54,511 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-22 07:10:54,512 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/table 2023-07-22 07:10:54,512 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/table 2023-07-22 07:10:54,512 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-22 07:10:54,512 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-22 07:10:54,513 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740 2023-07-22 07:10:54,514 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740 2023-07-22 07:10:54,517 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-22 07:10:54,518 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-22 07:10:54,520 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9416692960, jitterRate=-0.12300212681293488}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-22 07:10:54,520 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-22 07:10:54,521 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690009854467 2023-07-22 07:10:54,526 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-22 07:10:54,527 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-22 07:10:54,528 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45671,1690009853385, state=OPEN 2023-07-22 07:10:54,529 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-22 07:10:54,529 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 07:10:54,531 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-22 07:10:54,531 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45671,1690009853385 in 221 msec 2023-07-22 07:10:54,532 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-22 07:10:54,532 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 385 msec 2023-07-22 07:10:54,533 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 582 msec 2023-07-22 07:10:54,533 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report 
in: status=null, state=RUNNING, startTime=1690009854533, completionTime=-1 2023-07-22 07:10:54,534 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-22 07:10:54,534 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-22 07:10:54,537 DEBUG [hconnection-0x50711acc-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 07:10:54,538 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50452, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 07:10:54,540 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-22 07:10:54,540 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690009914540 2023-07-22 07:10:54,540 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690009974540 2023-07-22 07:10:54,540 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-07-22 07:10:54,549 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34769,1690009852974-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,549 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34769,1690009852974-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,549 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34769,1690009852974-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,549 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34769, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,549 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,549 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-22 07:10:54,550 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:54,551 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-22 07:10:54,551 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-22 07:10:54,552 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:10:54,553 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:10:54,554 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/hbase/namespace/4993ed070e736ec9d6fc2c02caacf0a4 2023-07-22 07:10:54,555 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/hbase/namespace/4993ed070e736ec9d6fc2c02caacf0a4 empty. 2023-07-22 07:10:54,555 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/hbase/namespace/4993ed070e736ec9d6fc2c02caacf0a4 2023-07-22 07:10:54,555 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-22 07:10:54,569 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34769,1690009852974] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:54,577 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:54,578 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4993ed070e736ec9d6fc2c02caacf0a4, NAME => 'hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp 2023-07-22 07:10:54,582 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34769,1690009852974] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-22 07:10:54,588 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:10:54,589 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:10:54,591 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/hbase/rsgroup/be1d70bd233ac7904719ee079f09c06e 2023-07-22 07:10:54,592 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/hbase/rsgroup/be1d70bd233ac7904719ee079f09c06e empty. 2023-07-22 07:10:54,597 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/hbase/rsgroup/be1d70bd233ac7904719ee079f09c06e 2023-07-22 07:10:54,597 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-22 07:10:54,603 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:54,603 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 4993ed070e736ec9d6fc2c02caacf0a4, disabling compactions & flushes 2023-07-22 07:10:54,603 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4. 2023-07-22 07:10:54,603 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4. 2023-07-22 07:10:54,603 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4. after waiting 0 ms 2023-07-22 07:10:54,603 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4. 2023-07-22 07:10:54,603 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4. 
2023-07-22 07:10:54,603 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 4993ed070e736ec9d6fc2c02caacf0a4: 2023-07-22 07:10:54,606 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:10:54,607 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690009854606"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009854606"}]},"ts":"1690009854606"} 2023-07-22 07:10:54,609 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 07:10:54,610 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:10:54,610 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009854610"}]},"ts":"1690009854610"} 2023-07-22 07:10:54,611 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-22 07:10:54,611 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:54,612 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => be1d70bd233ac7904719ee079f09c06e, NAME => 'hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp 2023-07-22 07:10:54,616 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:54,616 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:54,616 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:54,616 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:54,616 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:54,616 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4993ed070e736ec9d6fc2c02caacf0a4, ASSIGN}] 2023-07-22 07:10:54,620 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4993ed070e736ec9d6fc2c02caacf0a4, ASSIGN 2023-07-22 07:10:54,620 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=4993ed070e736ec9d6fc2c02caacf0a4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41875,1690009853586; forceNewPlan=false, retain=false 2023-07-22 07:10:54,625 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:54,625 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing be1d70bd233ac7904719ee079f09c06e, disabling compactions & flushes 2023-07-22 07:10:54,625 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e. 2023-07-22 07:10:54,625 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e. 2023-07-22 07:10:54,625 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e. after waiting 0 ms 2023-07-22 07:10:54,625 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e. 2023-07-22 07:10:54,625 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e. 2023-07-22 07:10:54,625 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for be1d70bd233ac7904719ee079f09c06e: 2023-07-22 07:10:54,627 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:10:54,627 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690009854627"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009854627"}]},"ts":"1690009854627"} 2023-07-22 07:10:54,628 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-22 07:10:54,629 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:10:54,629 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009854629"}]},"ts":"1690009854629"} 2023-07-22 07:10:54,630 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-22 07:10:54,633 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:54,633 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:54,633 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:54,633 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:54,633 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:54,633 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=be1d70bd233ac7904719ee079f09c06e, ASSIGN}] 2023-07-22 07:10:54,635 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=be1d70bd233ac7904719ee079f09c06e, ASSIGN 2023-07-22 07:10:54,635 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=be1d70bd233ac7904719ee079f09c06e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45671,1690009853385; forceNewPlan=false, retain=false 2023-07-22 07:10:54,635 INFO [jenkins-hbase4:34769] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-22 07:10:54,637 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=4993ed070e736ec9d6fc2c02caacf0a4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:54,637 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690009854637"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009854637"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009854637"}]},"ts":"1690009854637"} 2023-07-22 07:10:54,637 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=be1d70bd233ac7904719ee079f09c06e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45671,1690009853385 2023-07-22 07:10:54,637 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690009854637"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009854637"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009854637"}]},"ts":"1690009854637"} 2023-07-22 07:10:54,638 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 4993ed070e736ec9d6fc2c02caacf0a4, server=jenkins-hbase4.apache.org,41875,1690009853586}] 2023-07-22 07:10:54,641 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure be1d70bd233ac7904719ee079f09c06e, server=jenkins-hbase4.apache.org,45671,1690009853385}] 2023-07-22 07:10:54,793 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:54,793 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 07:10:54,795 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34472, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 07:10:54,797 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e. 2023-07-22 07:10:54,797 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => be1d70bd233ac7904719ee079f09c06e, NAME => 'hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:54,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-22 07:10:54,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e. service=MultiRowMutationService 2023-07-22 07:10:54,798 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-22 07:10:54,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup be1d70bd233ac7904719ee079f09c06e 2023-07-22 07:10:54,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:54,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for be1d70bd233ac7904719ee079f09c06e 2023-07-22 07:10:54,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for be1d70bd233ac7904719ee079f09c06e 2023-07-22 07:10:54,802 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4. 2023-07-22 07:10:54,802 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4993ed070e736ec9d6fc2c02caacf0a4, NAME => 'hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:54,802 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 4993ed070e736ec9d6fc2c02caacf0a4 2023-07-22 07:10:54,802 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:54,802 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4993ed070e736ec9d6fc2c02caacf0a4 2023-07-22 07:10:54,802 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4993ed070e736ec9d6fc2c02caacf0a4 2023-07-22 07:10:54,802 INFO [StoreOpener-be1d70bd233ac7904719ee079f09c06e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region be1d70bd233ac7904719ee079f09c06e 2023-07-22 07:10:54,804 DEBUG [StoreOpener-be1d70bd233ac7904719ee079f09c06e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/rsgroup/be1d70bd233ac7904719ee079f09c06e/m 2023-07-22 07:10:54,804 DEBUG [StoreOpener-be1d70bd233ac7904719ee079f09c06e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/rsgroup/be1d70bd233ac7904719ee079f09c06e/m 2023-07-22 07:10:54,805 INFO [StoreOpener-be1d70bd233ac7904719ee079f09c06e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 
0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region be1d70bd233ac7904719ee079f09c06e columnFamilyName m 2023-07-22 07:10:54,805 INFO [StoreOpener-be1d70bd233ac7904719ee079f09c06e-1] regionserver.HStore(310): Store=be1d70bd233ac7904719ee079f09c06e/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:54,806 INFO [StoreOpener-4993ed070e736ec9d6fc2c02caacf0a4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 4993ed070e736ec9d6fc2c02caacf0a4 2023-07-22 07:10:54,806 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/rsgroup/be1d70bd233ac7904719ee079f09c06e 2023-07-22 07:10:54,807 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/rsgroup/be1d70bd233ac7904719ee079f09c06e 2023-07-22 07:10:54,807 DEBUG [StoreOpener-4993ed070e736ec9d6fc2c02caacf0a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/namespace/4993ed070e736ec9d6fc2c02caacf0a4/info 2023-07-22 07:10:54,808 DEBUG [StoreOpener-4993ed070e736ec9d6fc2c02caacf0a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/namespace/4993ed070e736ec9d6fc2c02caacf0a4/info 2023-07-22 07:10:54,808 INFO [StoreOpener-4993ed070e736ec9d6fc2c02caacf0a4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4993ed070e736ec9d6fc2c02caacf0a4 columnFamilyName info 2023-07-22 07:10:54,810 INFO [StoreOpener-4993ed070e736ec9d6fc2c02caacf0a4-1] regionserver.HStore(310): Store=4993ed070e736ec9d6fc2c02caacf0a4/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:54,811 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/namespace/4993ed070e736ec9d6fc2c02caacf0a4 
2023-07-22 07:10:54,811 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/namespace/4993ed070e736ec9d6fc2c02caacf0a4 2023-07-22 07:10:54,813 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for be1d70bd233ac7904719ee079f09c06e 2023-07-22 07:10:54,814 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4993ed070e736ec9d6fc2c02caacf0a4 2023-07-22 07:10:54,819 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/rsgroup/be1d70bd233ac7904719ee079f09c06e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:54,820 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/namespace/4993ed070e736ec9d6fc2c02caacf0a4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:54,820 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened be1d70bd233ac7904719ee079f09c06e; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@73cb7e4e, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:54,820 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for be1d70bd233ac7904719ee079f09c06e: 2023-07-22 07:10:54,821 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4993ed070e736ec9d6fc2c02caacf0a4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9420445760, jitterRate=-0.12265262007713318}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:54,821 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4993ed070e736ec9d6fc2c02caacf0a4: 2023-07-22 07:10:54,821 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e., pid=9, masterSystemTime=1690009854793 2023-07-22 07:10:54,821 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4., pid=8, masterSystemTime=1690009854793 2023-07-22 07:10:54,824 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e. 2023-07-22 07:10:54,824 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e. 
2023-07-22 07:10:54,825 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=be1d70bd233ac7904719ee079f09c06e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45671,1690009853385 2023-07-22 07:10:54,825 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4. 2023-07-22 07:10:54,825 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690009854825"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009854825"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009854825"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009854825"}]},"ts":"1690009854825"} 2023-07-22 07:10:54,825 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4. 2023-07-22 07:10:54,827 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=4993ed070e736ec9d6fc2c02caacf0a4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:54,827 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690009854827"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009854827"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009854827"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009854827"}]},"ts":"1690009854827"} 2023-07-22 07:10:54,832 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-22 07:10:54,832 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure be1d70bd233ac7904719ee079f09c06e, server=jenkins-hbase4.apache.org,45671,1690009853385 in 191 msec 2023-07-22 07:10:54,833 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-22 07:10:54,833 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 4993ed070e736ec9d6fc2c02caacf0a4, server=jenkins-hbase4.apache.org,41875,1690009853586 in 193 msec 2023-07-22 07:10:54,834 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-22 07:10:54,834 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=be1d70bd233ac7904719ee079f09c06e, ASSIGN in 199 msec 2023-07-22 07:10:54,835 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:10:54,835 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009854835"}]},"ts":"1690009854835"} 2023-07-22 07:10:54,835 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): 
Finished subprocedure pid=6, resume processing ppid=4 2023-07-22 07:10:54,835 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=4993ed070e736ec9d6fc2c02caacf0a4, ASSIGN in 217 msec 2023-07-22 07:10:54,837 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:10:54,837 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-22 07:10:54,837 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009854837"}]},"ts":"1690009854837"} 2023-07-22 07:10:54,838 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-22 07:10:54,839 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:10:54,840 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:10:54,840 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 270 msec 2023-07-22 07:10:54,842 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 290 msec 2023-07-22 07:10:54,852 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-22 07:10:54,853 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-22 07:10:54,853 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:54,856 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 07:10:54,857 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34484, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 07:10:54,861 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-22 07:10:54,869 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 07:10:54,872 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 
msec 2023-07-22 07:10:54,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-22 07:10:54,888 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34769,1690009852974] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-22 07:10:54,888 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34769,1690009852974] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-22 07:10:54,893 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 07:10:54,898 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 13 msec 2023-07-22 07:10:54,899 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:54,899 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34769,1690009852974] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:54,901 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34769,1690009852974] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-22 07:10:54,902 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34769,1690009852974] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-22 07:10:54,907 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-22 07:10:54,910 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-22 07:10:54,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.130sec 2023-07-22 07:10:54,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
2023-07-22 07:10:54,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:54,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-22 07:10:54,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-22 07:10:54,914 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:10:54,915 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:10:54,916 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-22 07:10:54,917 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/hbase/quota/b4f294dd788c6b0c6ff4b3a331724b82 2023-07-22 07:10:54,917 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/hbase/quota/b4f294dd788c6b0c6ff4b3a331724b82 empty. 2023-07-22 07:10:54,918 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/hbase/quota/b4f294dd788c6b0c6ff4b3a331724b82 2023-07-22 07:10:54,918 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-22 07:10:54,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-22 07:10:54,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-22 07:10:54,925 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,926 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 07:10:54,926 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-22 07:10:54,926 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-22 07:10:54,926 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34769,1690009852974-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-22 07:10:54,926 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34769,1690009852974-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-22 07:10:54,929 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-22 07:10:54,940 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:54,941 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => b4f294dd788c6b0c6ff4b3a331724b82, NAME => 'hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp 2023-07-22 07:10:54,950 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:54,951 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing b4f294dd788c6b0c6ff4b3a331724b82, disabling compactions & flushes 2023-07-22 07:10:54,951 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82. 2023-07-22 07:10:54,951 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82. 2023-07-22 07:10:54,951 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82. after waiting 0 ms 2023-07-22 07:10:54,951 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82. 2023-07-22 07:10:54,951 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82. 
2023-07-22 07:10:54,951 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for b4f294dd788c6b0c6ff4b3a331724b82: 2023-07-22 07:10:54,953 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:10:54,954 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690009854954"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009854954"}]},"ts":"1690009854954"} 2023-07-22 07:10:54,955 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 07:10:54,956 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:10:54,956 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009854956"}]},"ts":"1690009854956"} 2023-07-22 07:10:54,958 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-22 07:10:54,960 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:54,960 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:54,960 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:54,961 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:54,961 DEBUG [Listener at localhost/37479] zookeeper.ReadOnlyZKClient(139): Connect 0x305d778f to 127.0.0.1:56037 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:10:54,961 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:54,961 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=b4f294dd788c6b0c6ff4b3a331724b82, ASSIGN}] 2023-07-22 07:10:54,963 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=b4f294dd788c6b0c6ff4b3a331724b82, ASSIGN 2023-07-22 07:10:54,965 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=b4f294dd788c6b0c6ff4b3a331724b82, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41875,1690009853586; forceNewPlan=false, retain=false 2023-07-22 07:10:54,967 DEBUG [Listener at localhost/37479] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@fd48745, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:10:54,968 DEBUG 
[hconnection-0x570ece30-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 07:10:54,970 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50454, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 07:10:54,972 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34769,1690009852974 2023-07-22 07:10:54,972 INFO [Listener at localhost/37479] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:10:54,974 DEBUG [Listener at localhost/37479] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-22 07:10:54,976 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34936, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-22 07:10:54,979 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-22 07:10:54,979 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:54,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-22 07:10:54,981 DEBUG [Listener at localhost/37479] zookeeper.ReadOnlyZKClient(139): Connect 0x04f4a10d to 127.0.0.1:56037 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:10:54,985 DEBUG [Listener at localhost/37479] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@772353c4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:10:54,985 INFO [Listener at localhost/37479] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:56037 2023-07-22 07:10:54,987 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:10:54,988 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1018bde5f1c000a connected 2023-07-22 07:10:54,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-22 07:10:54,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-22 07:10:54,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-22 07:10:55,005 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): 
master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 07:10:55,007 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 14 msec 2023-07-22 07:10:55,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-22 07:10:55,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:55,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-22 07:10:55,110 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:10:55,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-22 07:10:55,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-22 07:10:55,113 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:55,114 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-22 07:10:55,115 INFO [jenkins-hbase4:34769] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-22 07:10:55,117 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:10:55,117 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=b4f294dd788c6b0c6ff4b3a331724b82, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:55,117 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690009855117"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009855117"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009855117"}]},"ts":"1690009855117"} 2023-07-22 07:10:55,119 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/np1/table1/782f10edf2656749139885c85c29f2c1 2023-07-22 07:10:55,120 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/np1/table1/782f10edf2656749139885c85c29f2c1 empty. 
2023-07-22 07:10:55,121 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/np1/table1/782f10edf2656749139885c85c29f2c1 2023-07-22 07:10:55,121 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-22 07:10:55,121 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure b4f294dd788c6b0c6ff4b3a331724b82, server=jenkins-hbase4.apache.org,41875,1690009853586}] 2023-07-22 07:10:55,143 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-22 07:10:55,145 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 782f10edf2656749139885c85c29f2c1, NAME => 'np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp 2023-07-22 07:10:55,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-22 07:10:55,239 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-22 07:10:55,279 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82. 
2023-07-22 07:10:55,281 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b4f294dd788c6b0c6ff4b3a331724b82, NAME => 'hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:55,281 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota b4f294dd788c6b0c6ff4b3a331724b82 2023-07-22 07:10:55,281 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:55,281 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b4f294dd788c6b0c6ff4b3a331724b82 2023-07-22 07:10:55,281 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b4f294dd788c6b0c6ff4b3a331724b82 2023-07-22 07:10:55,285 INFO [StoreOpener-b4f294dd788c6b0c6ff4b3a331724b82-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region b4f294dd788c6b0c6ff4b3a331724b82 2023-07-22 07:10:55,288 DEBUG [StoreOpener-b4f294dd788c6b0c6ff4b3a331724b82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/quota/b4f294dd788c6b0c6ff4b3a331724b82/q 2023-07-22 07:10:55,288 DEBUG [StoreOpener-b4f294dd788c6b0c6ff4b3a331724b82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/quota/b4f294dd788c6b0c6ff4b3a331724b82/q 2023-07-22 07:10:55,288 INFO [StoreOpener-b4f294dd788c6b0c6ff4b3a331724b82-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b4f294dd788c6b0c6ff4b3a331724b82 columnFamilyName q 2023-07-22 07:10:55,289 INFO [StoreOpener-b4f294dd788c6b0c6ff4b3a331724b82-1] regionserver.HStore(310): Store=b4f294dd788c6b0c6ff4b3a331724b82/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:55,289 INFO [StoreOpener-b4f294dd788c6b0c6ff4b3a331724b82-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region b4f294dd788c6b0c6ff4b3a331724b82 2023-07-22 07:10:55,292 DEBUG 
[StoreOpener-b4f294dd788c6b0c6ff4b3a331724b82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/quota/b4f294dd788c6b0c6ff4b3a331724b82/u 2023-07-22 07:10:55,292 DEBUG [StoreOpener-b4f294dd788c6b0c6ff4b3a331724b82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/quota/b4f294dd788c6b0c6ff4b3a331724b82/u 2023-07-22 07:10:55,295 INFO [StoreOpener-b4f294dd788c6b0c6ff4b3a331724b82-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b4f294dd788c6b0c6ff4b3a331724b82 columnFamilyName u 2023-07-22 07:10:55,296 INFO [StoreOpener-b4f294dd788c6b0c6ff4b3a331724b82-1] regionserver.HStore(310): Store=b4f294dd788c6b0c6ff4b3a331724b82/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:55,297 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/quota/b4f294dd788c6b0c6ff4b3a331724b82 2023-07-22 07:10:55,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/quota/b4f294dd788c6b0c6ff4b3a331724b82 2023-07-22 07:10:55,300 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-22 07:10:55,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b4f294dd788c6b0c6ff4b3a331724b82 2023-07-22 07:10:55,305 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/quota/b4f294dd788c6b0c6ff4b3a331724b82/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:55,306 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b4f294dd788c6b0c6ff4b3a331724b82; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9682588480, jitterRate=-0.09823867678642273}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-22 07:10:55,306 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b4f294dd788c6b0c6ff4b3a331724b82: 2023-07-22 07:10:55,307 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82., pid=16, masterSystemTime=1690009855274 2023-07-22 07:10:55,309 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82. 2023-07-22 07:10:55,309 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82. 2023-07-22 07:10:55,309 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=b4f294dd788c6b0c6ff4b3a331724b82, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:55,309 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690009855309"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009855309"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009855309"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009855309"}]},"ts":"1690009855309"} 2023-07-22 07:10:55,312 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-22 07:10:55,312 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure b4f294dd788c6b0c6ff4b3a331724b82, server=jenkins-hbase4.apache.org,41875,1690009853586 in 189 msec 2023-07-22 07:10:55,314 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-22 07:10:55,314 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=b4f294dd788c6b0c6ff4b3a331724b82, ASSIGN in 351 msec 2023-07-22 07:10:55,314 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:10:55,315 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009855315"}]},"ts":"1690009855315"} 2023-07-22 07:10:55,316 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-22 07:10:55,318 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:10:55,319 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 407 msec 2023-07-22 07:10:55,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-22 07:10:55,558 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:55,558 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 782f10edf2656749139885c85c29f2c1, disabling compactions & flushes 2023-07-22 07:10:55,558 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1. 2023-07-22 07:10:55,558 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1. 2023-07-22 07:10:55,558 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1. after waiting 0 ms 2023-07-22 07:10:55,558 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1. 2023-07-22 07:10:55,558 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1. 2023-07-22 07:10:55,558 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 782f10edf2656749139885c85c29f2c1: 2023-07-22 07:10:55,561 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:10:55,563 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690009855562"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009855562"}]},"ts":"1690009855562"} 2023-07-22 07:10:55,566 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-22 07:10:55,566 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:10:55,567 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009855567"}]},"ts":"1690009855567"} 2023-07-22 07:10:55,571 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-22 07:10:55,574 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:10:55,574 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:10:55,574 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:10:55,574 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:10:55,574 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:10:55,574 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=782f10edf2656749139885c85c29f2c1, ASSIGN}] 2023-07-22 07:10:55,575 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=782f10edf2656749139885c85c29f2c1, ASSIGN 2023-07-22 07:10:55,575 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=782f10edf2656749139885c85c29f2c1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33331,1690009853171; forceNewPlan=false, retain=false 2023-07-22 07:10:55,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-22 07:10:55,726 INFO [jenkins-hbase4:34769] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-22 07:10:55,727 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=782f10edf2656749139885c85c29f2c1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:55,727 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690009855727"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009855727"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009855727"}]},"ts":"1690009855727"} 2023-07-22 07:10:55,728 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 782f10edf2656749139885c85c29f2c1, server=jenkins-hbase4.apache.org,33331,1690009853171}] 2023-07-22 07:10:55,880 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:55,880 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 07:10:55,882 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55126, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 07:10:55,886 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1. 2023-07-22 07:10:55,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 782f10edf2656749139885c85c29f2c1, NAME => 'np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:10:55,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 782f10edf2656749139885c85c29f2c1 2023-07-22 07:10:55,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:55,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 782f10edf2656749139885c85c29f2c1 2023-07-22 07:10:55,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 782f10edf2656749139885c85c29f2c1 2023-07-22 07:10:55,888 INFO [StoreOpener-782f10edf2656749139885c85c29f2c1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 782f10edf2656749139885c85c29f2c1 2023-07-22 07:10:55,889 DEBUG [StoreOpener-782f10edf2656749139885c85c29f2c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/np1/table1/782f10edf2656749139885c85c29f2c1/fam1 2023-07-22 07:10:55,889 DEBUG [StoreOpener-782f10edf2656749139885c85c29f2c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/np1/table1/782f10edf2656749139885c85c29f2c1/fam1 2023-07-22 07:10:55,889 INFO [StoreOpener-782f10edf2656749139885c85c29f2c1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 782f10edf2656749139885c85c29f2c1 columnFamilyName fam1 2023-07-22 07:10:55,890 INFO [StoreOpener-782f10edf2656749139885c85c29f2c1-1] regionserver.HStore(310): Store=782f10edf2656749139885c85c29f2c1/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:10:55,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/np1/table1/782f10edf2656749139885c85c29f2c1 2023-07-22 07:10:55,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/np1/table1/782f10edf2656749139885c85c29f2c1 2023-07-22 07:10:55,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 782f10edf2656749139885c85c29f2c1 2023-07-22 07:10:55,896 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/np1/table1/782f10edf2656749139885c85c29f2c1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:10:55,897 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 782f10edf2656749139885c85c29f2c1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11081897600, jitterRate=0.03208214044570923}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:10:55,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 782f10edf2656749139885c85c29f2c1: 2023-07-22 07:10:55,897 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1., pid=18, masterSystemTime=1690009855880 2023-07-22 07:10:55,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1. 2023-07-22 07:10:55,901 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1. 
2023-07-22 07:10:55,901 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=782f10edf2656749139885c85c29f2c1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:55,901 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690009855901"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009855901"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009855901"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009855901"}]},"ts":"1690009855901"} 2023-07-22 07:10:55,904 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-22 07:10:55,904 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 782f10edf2656749139885c85c29f2c1, server=jenkins-hbase4.apache.org,33331,1690009853171 in 174 msec 2023-07-22 07:10:55,905 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-22 07:10:55,905 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=782f10edf2656749139885c85c29f2c1, ASSIGN in 330 msec 2023-07-22 07:10:55,906 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:10:55,906 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009855906"}]},"ts":"1690009855906"} 2023-07-22 07:10:55,907 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-22 07:10:55,910 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:10:55,911 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 804 msec 2023-07-22 07:10:56,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-22 07:10:56,217 INFO [Listener at localhost/37479] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-22 07:10:56,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:56,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-22 07:10:56,221 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=19, 
state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:10:56,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-22 07:10:56,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-22 07:10:56,242 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=23 msec 2023-07-22 07:10:56,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-22 07:10:56,326 INFO [Listener at localhost/37479] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-22 07:10:56,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:10:56,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:10:56,329 INFO [Listener at localhost/37479] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-22 07:10:56,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-22 07:10:56,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-22 07:10:56,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-22 07:10:56,333 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009856332"}]},"ts":"1690009856332"} 2023-07-22 07:10:56,334 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-22 07:10:56,335 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-22 07:10:56,335 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=782f10edf2656749139885c85c29f2c1, UNASSIGN}] 2023-07-22 07:10:56,336 INFO [PEWorker-3] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=782f10edf2656749139885c85c29f2c1, UNASSIGN 2023-07-22 07:10:56,337 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=782f10edf2656749139885c85c29f2c1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:56,337 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690009856337"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009856337"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009856337"}]},"ts":"1690009856337"} 2023-07-22 07:10:56,338 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 782f10edf2656749139885c85c29f2c1, server=jenkins-hbase4.apache.org,33331,1690009853171}] 2023-07-22 07:10:56,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-22 07:10:56,490 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 782f10edf2656749139885c85c29f2c1 2023-07-22 07:10:56,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 782f10edf2656749139885c85c29f2c1, disabling compactions & flushes 2023-07-22 07:10:56,491 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1. 2023-07-22 07:10:56,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1. 2023-07-22 07:10:56,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1. after waiting 0 ms 2023-07-22 07:10:56,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1. 2023-07-22 07:10:56,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/np1/table1/782f10edf2656749139885c85c29f2c1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:56,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1. 
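[editor annotation] The rolled-back CreateTableProcedure (pid=19) earlier in this log shows the namespace region quota doing its job: 'np1' already holds one region (np1:table1), so a table pre-split into six regions would push the namespace past hbase.namespace.quota.maxregions=5. A hedged sketch of a client call that would trigger the same rejection — the five split keys below are illustrative and not necessarily the ones the test uses — could be:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class Np1QuotaCheck {
      // 'admin' is an Admin handle obtained as in the previous sketch.
      static void tryCreateTable2(Admin admin) {
        byte[][] splitKeys = {  // 5 split keys -> 6 regions, over the 5-region namespace quota
            Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"),
            Bytes.toBytes("4"), Bytes.toBytes("5")};
        try {
          admin.createTable(TableDescriptorBuilder
              .newBuilder(TableName.valueOf("np1", "table2"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
              .build(), splitKeys);
        } catch (IOException e) {
          // The master rolls back the CreateTableProcedure; the client surfaces the
          // QuotaExceededException message recorded in the log above.
          System.err.println("np1:table2 rejected: " + e.getMessage());
        }
      }
    }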
2023-07-22 07:10:56,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 782f10edf2656749139885c85c29f2c1: 2023-07-22 07:10:56,496 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 782f10edf2656749139885c85c29f2c1 2023-07-22 07:10:56,497 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=782f10edf2656749139885c85c29f2c1, regionState=CLOSED 2023-07-22 07:10:56,497 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690009856497"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009856497"}]},"ts":"1690009856497"} 2023-07-22 07:10:56,500 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-22 07:10:56,500 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 782f10edf2656749139885c85c29f2c1, server=jenkins-hbase4.apache.org,33331,1690009853171 in 160 msec 2023-07-22 07:10:56,501 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-22 07:10:56,501 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=782f10edf2656749139885c85c29f2c1, UNASSIGN in 165 msec 2023-07-22 07:10:56,502 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009856502"}]},"ts":"1690009856502"} 2023-07-22 07:10:56,503 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-22 07:10:56,504 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-22 07:10:56,506 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 175 msec 2023-07-22 07:10:56,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-22 07:10:56,634 INFO [Listener at localhost/37479] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-22 07:10:56,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-22 07:10:56,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-22 07:10:56,638 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-22 07:10:56,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-22 07:10:56,639 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-22 07:10:56,640 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:10:56,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-22 07:10:56,643 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/np1/table1/782f10edf2656749139885c85c29f2c1 2023-07-22 07:10:56,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-22 07:10:56,645 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/np1/table1/782f10edf2656749139885c85c29f2c1/fam1, FileablePath, hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/np1/table1/782f10edf2656749139885c85c29f2c1/recovered.edits] 2023-07-22 07:10:56,651 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/np1/table1/782f10edf2656749139885c85c29f2c1/recovered.edits/4.seqid to hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/archive/data/np1/table1/782f10edf2656749139885c85c29f2c1/recovered.edits/4.seqid 2023-07-22 07:10:56,652 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/.tmp/data/np1/table1/782f10edf2656749139885c85c29f2c1 2023-07-22 07:10:56,652 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-22 07:10:56,655 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-22 07:10:56,656 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-22 07:10:56,658 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-22 07:10:56,660 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-22 07:10:56,660 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-22 07:10:56,660 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009856660"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:56,662 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-22 07:10:56,662 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 782f10edf2656749139885c85c29f2c1, NAME => 'np1:table1,,1690009855105.782f10edf2656749139885c85c29f2c1.', STARTKEY => '', ENDKEY => ''}] 2023-07-22 07:10:56,662 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
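[editor annotation] The DisableTableProcedure (pid=20) and DeleteTableProcedure (pid=23) above are driven by two plain Admin calls. A minimal sketch, assuming the same Admin handle as in the earlier sketches:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class Np1Cleanup {
      // Mirrors the DISABLE and DELETE requests for 'np1:table1' logged above.
      static void dropTable1(Admin admin) throws IOException {
        TableName t1 = TableName.valueOf("np1", "table1");
        admin.disableTable(t1);  // DisableTableProcedure (pid=20 in this run)
        admin.deleteTable(t1);   // DeleteTableProcedure (pid=23)
      }
    }

Disable must complete (all regions unassigned) before the delete is accepted, which is why the unassign/close sequence for 782f10edf2656749139885c85c29f2c1 precedes the archive and hbase:meta cleanup above.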
2023-07-22 07:10:56,662 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690009856662"}]},"ts":"9223372036854775807"} 2023-07-22 07:10:56,663 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-22 07:10:56,666 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-22 07:10:56,668 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 31 msec 2023-07-22 07:10:56,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-22 07:10:56,745 INFO [Listener at localhost/37479] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-22 07:10:56,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-22 07:10:56,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-22 07:10:56,758 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-22 07:10:56,760 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-22 07:10:56,762 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-22 07:10:56,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-22 07:10:56,764 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-22 07:10:56,764 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 07:10:56,764 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-22 07:10:56,766 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-22 07:10:56,767 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 16 msec 2023-07-22 07:10:56,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-22 07:10:56,864 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-22 07:10:56,864 INFO [Listener at 
localhost/37479] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-22 07:10:56,865 DEBUG [Listener at localhost/37479] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x305d778f to 127.0.0.1:56037 2023-07-22 07:10:56,865 DEBUG [Listener at localhost/37479] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:56,865 DEBUG [Listener at localhost/37479] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-22 07:10:56,865 DEBUG [Listener at localhost/37479] util.JVMClusterUtil(257): Found active master hash=1986161473, stopped=false 2023-07-22 07:10:56,865 DEBUG [Listener at localhost/37479] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-22 07:10:56,865 DEBUG [Listener at localhost/37479] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-22 07:10:56,865 DEBUG [Listener at localhost/37479] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-22 07:10:56,865 INFO [Listener at localhost/37479] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34769,1690009852974 2023-07-22 07:10:56,867 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:33331-0x1018bde5f1c0001, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:56,867 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:56,867 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:56,867 INFO [Listener at localhost/37479] procedure2.ProcedureExecutor(629): Stopping 2023-07-22 07:10:56,867 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 07:10:56,867 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:56,870 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:56,870 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:56,870 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:56,874 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33331-0x1018bde5f1c0001, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/running 2023-07-22 07:10:56,874 DEBUG [Listener at localhost/37479] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x59e053e0 to 127.0.0.1:56037 2023-07-22 07:10:56,875 DEBUG [Listener at localhost/37479] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:56,875 INFO [Listener at localhost/37479] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33331,1690009853171' ***** 2023-07-22 07:10:56,875 INFO [Listener at localhost/37479] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 07:10:56,875 INFO [Listener at localhost/37479] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45671,1690009853385' ***** 2023-07-22 07:10:56,875 INFO [Listener at localhost/37479] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 07:10:56,875 INFO [RS:0;jenkins-hbase4:33331] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 07:10:56,875 INFO [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 07:10:56,875 INFO [Listener at localhost/37479] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41875,1690009853586' ***** 2023-07-22 07:10:56,876 INFO [Listener at localhost/37479] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 07:10:56,877 INFO [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 07:10:56,887 INFO [RS:0;jenkins-hbase4:33331] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3ad44ca8{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:56,887 INFO [RS:1;jenkins-hbase4:45671] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2ca11e70{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:56,887 INFO [RS:0;jenkins-hbase4:33331] server.AbstractConnector(383): Stopped ServerConnector@6c4b4518{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:10:56,887 INFO [RS:2;jenkins-hbase4:41875] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2b9ac065{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:56,887 INFO [RS:0;jenkins-hbase4:33331] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 07:10:56,888 INFO [RS:1;jenkins-hbase4:45671] server.AbstractConnector(383): Stopped ServerConnector@36c8a553{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:10:56,888 INFO [RS:2;jenkins-hbase4:41875] server.AbstractConnector(383): Stopped ServerConnector@125559c0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:10:56,888 INFO [RS:1;jenkins-hbase4:45671] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 07:10:56,888 INFO [RS:0;jenkins-hbase4:33331] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7e73adce{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 07:10:56,888 INFO [RS:2;jenkins-hbase4:41875] session.HouseKeeper(149): node0 Stopped 
scavenging 2023-07-22 07:10:56,890 INFO [RS:0;jenkins-hbase4:33331] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4c16c29d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/hadoop.log.dir/,STOPPED} 2023-07-22 07:10:56,890 INFO [RS:1;jenkins-hbase4:45671] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4fbf3e9c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 07:10:56,891 INFO [RS:2;jenkins-hbase4:41875] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@53586b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 07:10:56,891 INFO [RS:1;jenkins-hbase4:45671] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@ff264a2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/hadoop.log.dir/,STOPPED} 2023-07-22 07:10:56,891 INFO [RS:2;jenkins-hbase4:41875] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@e8a3f97{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/hadoop.log.dir/,STOPPED} 2023-07-22 07:10:56,891 INFO [RS:0;jenkins-hbase4:33331] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 07:10:56,891 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 07:10:56,891 INFO [RS:0;jenkins-hbase4:33331] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 07:10:56,891 INFO [RS:0;jenkins-hbase4:33331] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 07:10:56,891 INFO [RS:1;jenkins-hbase4:45671] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 07:10:56,892 INFO [RS:1;jenkins-hbase4:45671] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 07:10:56,892 INFO [RS:1;jenkins-hbase4:45671] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 07:10:56,892 INFO [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(3305): Received CLOSE for be1d70bd233ac7904719ee079f09c06e 2023-07-22 07:10:56,893 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 07:10:56,891 INFO [RS:0;jenkins-hbase4:33331] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:56,893 DEBUG [RS:0;jenkins-hbase4:33331] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1557a427 to 127.0.0.1:56037 2023-07-22 07:10:56,893 DEBUG [RS:0;jenkins-hbase4:33331] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:56,893 INFO [RS:0;jenkins-hbase4:33331] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33331,1690009853171; all regions closed. 2023-07-22 07:10:56,893 DEBUG [RS:0;jenkins-hbase4:33331] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
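[editor annotation] The remaining entries cover the namespace removal (pid=24) and the minicluster teardown that is now in progress. A sketch of the corresponding calls, assuming access to the test's HBaseTestingUtility instance (commonly a static field in these test classes) — names here are illustrative:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.client.Admin;

    public class Np1Teardown {
      // 'util' is the HBaseTestingUtility that started the minicluster; 'admin' as before.
      static void tearDown(HBaseTestingUtility util, Admin admin) throws Exception {
        admin.deleteNamespace("np1");  // DeleteNamespaceProcedure (pid=24 above)
        util.shutdownMiniCluster();    // triggers the region server and master shutdown logged here
      }
    }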
2023-07-22 07:10:56,893 INFO [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45671,1690009853385 2023-07-22 07:10:56,894 INFO [RS:2;jenkins-hbase4:41875] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 07:10:56,894 DEBUG [RS:1;jenkins-hbase4:45671] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3345ab29 to 127.0.0.1:56037 2023-07-22 07:10:56,894 INFO [RS:2;jenkins-hbase4:41875] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 07:10:56,894 DEBUG [RS:1;jenkins-hbase4:45671] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:56,894 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 07:10:56,894 INFO [RS:2;jenkins-hbase4:41875] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 07:10:56,895 INFO [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(3305): Received CLOSE for b4f294dd788c6b0c6ff4b3a331724b82 2023-07-22 07:10:56,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing be1d70bd233ac7904719ee079f09c06e, disabling compactions & flushes 2023-07-22 07:10:56,894 INFO [RS:1;jenkins-hbase4:45671] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 07:10:56,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e. 2023-07-22 07:10:56,895 INFO [RS:1;jenkins-hbase4:45671] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 07:10:56,895 INFO [RS:1;jenkins-hbase4:45671] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-22 07:10:56,895 INFO [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(3305): Received CLOSE for 4993ed070e736ec9d6fc2c02caacf0a4 2023-07-22 07:10:56,895 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e. 2023-07-22 07:10:56,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e. after waiting 0 ms 2023-07-22 07:10:56,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e. 2023-07-22 07:10:56,896 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing be1d70bd233ac7904719ee079f09c06e 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-22 07:10:56,896 INFO [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:56,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b4f294dd788c6b0c6ff4b3a331724b82, disabling compactions & flushes 2023-07-22 07:10:56,895 INFO [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-22 07:10:56,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82. 
2023-07-22 07:10:56,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82. 2023-07-22 07:10:56,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82. after waiting 0 ms 2023-07-22 07:10:56,897 DEBUG [RS:2;jenkins-hbase4:41875] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x54f63bf8 to 127.0.0.1:56037 2023-07-22 07:10:56,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82. 2023-07-22 07:10:56,897 DEBUG [RS:2;jenkins-hbase4:41875] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:56,897 INFO [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-22 07:10:56,897 DEBUG [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(1478): Online Regions={b4f294dd788c6b0c6ff4b3a331724b82=hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82., 4993ed070e736ec9d6fc2c02caacf0a4=hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4.} 2023-07-22 07:10:56,898 DEBUG [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(1504): Waiting on 4993ed070e736ec9d6fc2c02caacf0a4, b4f294dd788c6b0c6ff4b3a331724b82 2023-07-22 07:10:56,902 INFO [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-22 07:10:56,902 DEBUG [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, be1d70bd233ac7904719ee079f09c06e=hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e.} 2023-07-22 07:10:56,904 DEBUG [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(1504): Waiting on 1588230740, be1d70bd233ac7904719ee079f09c06e 2023-07-22 07:10:56,908 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-22 07:10:56,908 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-22 07:10:56,908 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-22 07:10:56,908 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-22 07:10:56,908 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-22 07:10:56,908 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-22 07:10:56,910 DEBUG [RS:0;jenkins-hbase4:33331] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/oldWALs 2023-07-22 07:10:56,910 INFO [RS:0;jenkins-hbase4:33331] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33331%2C1690009853171:(num 1690009854270) 2023-07-22 07:10:56,910 DEBUG [RS:0;jenkins-hbase4:33331] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:56,910 INFO [RS:0;jenkins-hbase4:33331] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:10:56,910 
INFO [RS:0;jenkins-hbase4:33331] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-22 07:10:56,910 INFO [RS:0;jenkins-hbase4:33331] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 07:10:56,910 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 07:10:56,910 INFO [RS:0;jenkins-hbase4:33331] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 07:10:56,911 INFO [RS:0;jenkins-hbase4:33331] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-22 07:10:56,911 INFO [RS:0;jenkins-hbase4:33331] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33331 2023-07-22 07:10:56,916 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:56,916 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:33331-0x1018bde5f1c0001, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:56,916 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:56,916 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33331,1690009853171 2023-07-22 07:10:56,916 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:56,916 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:33331-0x1018bde5f1c0001, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:56,916 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:56,916 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33331,1690009853171] 2023-07-22 07:10:56,916 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33331,1690009853171; numProcessing=1 2023-07-22 07:10:56,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/quota/b4f294dd788c6b0c6ff4b3a331724b82/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:10:56,921 
INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82. 2023-07-22 07:10:56,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b4f294dd788c6b0c6ff4b3a331724b82: 2023-07-22 07:10:56,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690009854911.b4f294dd788c6b0c6ff4b3a331724b82. 2023-07-22 07:10:56,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4993ed070e736ec9d6fc2c02caacf0a4, disabling compactions & flushes 2023-07-22 07:10:56,922 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4. 2023-07-22 07:10:56,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4. 2023-07-22 07:10:56,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4. after waiting 0 ms 2023-07-22 07:10:56,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4. 2023-07-22 07:10:56,922 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 4993ed070e736ec9d6fc2c02caacf0a4 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-22 07:10:56,923 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33331,1690009853171 already deleted, retry=false 2023-07-22 07:10:56,923 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33331,1690009853171 expired; onlineServers=2 2023-07-22 07:10:56,925 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:10:56,926 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:10:56,929 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:10:56,940 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/rsgroup/be1d70bd233ac7904719ee079f09c06e/.tmp/m/590bf270025848ccaac3833991c9ffe8 2023-07-22 07:10:56,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/rsgroup/be1d70bd233ac7904719ee079f09c06e/.tmp/m/590bf270025848ccaac3833991c9ffe8 as hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/rsgroup/be1d70bd233ac7904719ee079f09c06e/m/590bf270025848ccaac3833991c9ffe8 2023-07-22 07:10:56,953 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/rsgroup/be1d70bd233ac7904719ee079f09c06e/m/590bf270025848ccaac3833991c9ffe8, 
entries=1, sequenceid=7, filesize=4.9 K 2023-07-22 07:10:56,954 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/namespace/4993ed070e736ec9d6fc2c02caacf0a4/.tmp/info/8d98aefce7f84b698c52d278ae0a85e5 2023-07-22 07:10:56,956 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for be1d70bd233ac7904719ee079f09c06e in 60ms, sequenceid=7, compaction requested=false 2023-07-22 07:10:56,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-22 07:10:56,957 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/.tmp/info/2f63a9c004074834821a5c25d1c8b914 2023-07-22 07:10:56,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8d98aefce7f84b698c52d278ae0a85e5 2023-07-22 07:10:56,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/namespace/4993ed070e736ec9d6fc2c02caacf0a4/.tmp/info/8d98aefce7f84b698c52d278ae0a85e5 as hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/namespace/4993ed070e736ec9d6fc2c02caacf0a4/info/8d98aefce7f84b698c52d278ae0a85e5 2023-07-22 07:10:56,970 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8d98aefce7f84b698c52d278ae0a85e5 2023-07-22 07:10:56,971 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/namespace/4993ed070e736ec9d6fc2c02caacf0a4/info/8d98aefce7f84b698c52d278ae0a85e5, entries=3, sequenceid=8, filesize=5.0 K 2023-07-22 07:10:56,972 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/rsgroup/be1d70bd233ac7904719ee079f09c06e/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-22 07:10:56,972 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 4993ed070e736ec9d6fc2c02caacf0a4 in 50ms, sequenceid=8, compaction requested=false 2023-07-22 07:10:56,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-22 07:10:56,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 07:10:56,974 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed 
hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e. 2023-07-22 07:10:56,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for be1d70bd233ac7904719ee079f09c06e: 2023-07-22 07:10:56,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690009854569.be1d70bd233ac7904719ee079f09c06e. 2023-07-22 07:10:56,977 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2f63a9c004074834821a5c25d1c8b914 2023-07-22 07:10:56,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/namespace/4993ed070e736ec9d6fc2c02caacf0a4/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-22 07:10:56,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4. 2023-07-22 07:10:56,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4993ed070e736ec9d6fc2c02caacf0a4: 2023-07-22 07:10:56,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690009854549.4993ed070e736ec9d6fc2c02caacf0a4. 2023-07-22 07:10:56,993 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/.tmp/rep_barrier/0b32dfd2425143f0926d65b24f33c677 2023-07-22 07:10:56,999 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0b32dfd2425143f0926d65b24f33c677 2023-07-22 07:10:57,013 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/.tmp/table/ec15627b02d541c1ac1f96c4b9673d73 2023-07-22 07:10:57,019 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ec15627b02d541c1ac1f96c4b9673d73 2023-07-22 07:10:57,020 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/.tmp/info/2f63a9c004074834821a5c25d1c8b914 as hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/info/2f63a9c004074834821a5c25d1c8b914 2023-07-22 07:10:57,024 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2f63a9c004074834821a5c25d1c8b914 2023-07-22 07:10:57,025 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/info/2f63a9c004074834821a5c25d1c8b914, entries=32, sequenceid=31, filesize=8.5 K 2023-07-22 07:10:57,025 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/.tmp/rep_barrier/0b32dfd2425143f0926d65b24f33c677 as hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/rep_barrier/0b32dfd2425143f0926d65b24f33c677 2023-07-22 07:10:57,030 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0b32dfd2425143f0926d65b24f33c677 2023-07-22 07:10:57,030 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/rep_barrier/0b32dfd2425143f0926d65b24f33c677, entries=1, sequenceid=31, filesize=4.9 K 2023-07-22 07:10:57,031 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/.tmp/table/ec15627b02d541c1ac1f96c4b9673d73 as hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/table/ec15627b02d541c1ac1f96c4b9673d73 2023-07-22 07:10:57,039 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ec15627b02d541c1ac1f96c4b9673d73 2023-07-22 07:10:57,039 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/table/ec15627b02d541c1ac1f96c4b9673d73, entries=8, sequenceid=31, filesize=5.2 K 2023-07-22 07:10:57,040 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 132ms, sequenceid=31, compaction requested=false 2023-07-22 07:10:57,040 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-22 07:10:57,049 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-22 07:10:57,050 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 07:10:57,050 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-22 07:10:57,050 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-22 07:10:57,050 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-22 07:10:57,066 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:33331-0x1018bde5f1c0001, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:57,066 INFO [RS:0;jenkins-hbase4:33331] regionserver.HRegionServer(1227): Exiting; 
stopping=jenkins-hbase4.apache.org,33331,1690009853171; zookeeper connection closed. 2023-07-22 07:10:57,066 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:33331-0x1018bde5f1c0001, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:57,067 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@373e7323] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@373e7323 2023-07-22 07:10:57,098 INFO [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41875,1690009853586; all regions closed. 2023-07-22 07:10:57,098 DEBUG [RS:2;jenkins-hbase4:41875] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-22 07:10:57,104 DEBUG [RS:2;jenkins-hbase4:41875] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/oldWALs 2023-07-22 07:10:57,104 INFO [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45671,1690009853385; all regions closed. 2023-07-22 07:10:57,104 DEBUG [RS:1;jenkins-hbase4:45671] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-22 07:10:57,104 INFO [RS:2;jenkins-hbase4:41875] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41875%2C1690009853586:(num 1690009854272) 2023-07-22 07:10:57,104 DEBUG [RS:2;jenkins-hbase4:41875] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:57,104 INFO [RS:2;jenkins-hbase4:41875] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:10:57,105 INFO [RS:2;jenkins-hbase4:41875] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-22 07:10:57,105 INFO [RS:2;jenkins-hbase4:41875] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 07:10:57,105 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 07:10:57,105 INFO [RS:2;jenkins-hbase4:41875] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 07:10:57,105 INFO [RS:2;jenkins-hbase4:41875] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
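Once a RegionServer has closed its regions it closes its AsyncFSWAL and archives the remaining WAL files, which is what the "Moved 1 WAL file(s) to .../oldWALs" lines record. A test that wants to assert the archival happened can list that directory directly with the Hadoop FileSystem API; a sketch under the assumption that, as in the paths above, oldWALs sits directly under the cluster's root directory:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class OldWalsExample {
  // Lists archived WAL files under <rootDir>/oldWALs, the directory the
  // "Moved 1 WAL file(s) to .../oldWALs" lines above refer to.
  static FileStatus[] listArchivedWals(Configuration conf, Path rootDir) throws IOException {
    FileSystem fs = rootDir.getFileSystem(conf);
    return fs.listStatus(new Path(rootDir, "oldWALs"));
  }
}
```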
2023-07-22 07:10:57,106 INFO [RS:2;jenkins-hbase4:41875] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41875 2023-07-22 07:10:57,111 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:57,111 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41875,1690009853586 2023-07-22 07:10:57,111 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:57,114 DEBUG [RS:1;jenkins-hbase4:45671] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/oldWALs 2023-07-22 07:10:57,114 INFO [RS:1;jenkins-hbase4:45671] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45671%2C1690009853385.meta:.meta(num 1690009854476) 2023-07-22 07:10:57,114 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41875,1690009853586] 2023-07-22 07:10:57,114 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41875,1690009853586; numProcessing=2 2023-07-22 07:10:57,116 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41875,1690009853586 already deleted, retry=false 2023-07-22 07:10:57,116 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41875,1690009853586 expired; onlineServers=1 2023-07-22 07:10:57,118 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/WALs/jenkins-hbase4.apache.org,45671,1690009853385/jenkins-hbase4.apache.org%2C45671%2C1690009853385.1690009854271 not finished, retry = 0 2023-07-22 07:10:57,118 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-22 07:10:57,118 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-22 07:10:57,220 DEBUG [RS:1;jenkins-hbase4:45671] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/oldWALs 2023-07-22 07:10:57,220 INFO [RS:1;jenkins-hbase4:45671] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45671%2C1690009853385:(num 1690009854271) 2023-07-22 07:10:57,220 DEBUG [RS:1;jenkins-hbase4:45671] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:57,220 INFO [RS:1;jenkins-hbase4:45671] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:10:57,221 INFO [RS:1;jenkins-hbase4:45671] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-22 07:10:57,221 INFO 
[regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 07:10:57,222 INFO [RS:1;jenkins-hbase4:45671] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45671 2023-07-22 07:10:57,225 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45671,1690009853385 2023-07-22 07:10:57,225 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:10:57,226 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45671,1690009853385] 2023-07-22 07:10:57,226 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45671,1690009853385; numProcessing=3 2023-07-22 07:10:57,228 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45671,1690009853385 already deleted, retry=false 2023-07-22 07:10:57,228 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45671,1690009853385 expired; onlineServers=0 2023-07-22 07:10:57,228 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34769,1690009852974' ***** 2023-07-22 07:10:57,228 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-22 07:10:57,229 DEBUG [M:0;jenkins-hbase4:34769] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@725de155, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 07:10:57,229 INFO [M:0;jenkins-hbase4:34769] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 07:10:57,230 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-22 07:10:57,230 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:57,231 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 07:10:57,231 INFO [M:0;jenkins-hbase4:34769] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@46b1fc36{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-22 07:10:57,231 INFO [M:0;jenkins-hbase4:34769] server.AbstractConnector(383): Stopped ServerConnector@6bab3cdc{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:10:57,231 INFO [M:0;jenkins-hbase4:34769] session.HouseKeeper(149): node0 Stopped scavenging 
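The NodeDeleted events on /hbase/rs/... and the "RegionServer ephemeral node deleted, processing expiration" lines above are the liveness mechanism at work: each RegionServer registers an ephemeral znode under /hbase/rs, and when its ZooKeeper session closes the znode disappears and the master's RegionServerTracker expires the server. A stripped-down sketch of that pattern using the plain ZooKeeper client; the paths and parent node are illustrative and not HBase's actual layout code:

```java
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

final class EphemeralLivenessSketch {
  // A "server" advertises itself with an ephemeral znode; the node vanishes when its session closes.
  static void register(ZooKeeper zk, String serverName) throws KeeperException, InterruptedException {
    zk.create("/demo/rs/" + serverName, new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
  }

  // A "tracker" lists the children and re-registers the watcher on every change,
  // which is roughly what the NodeChildrenChanged/NodeDeleted events above correspond to.
  static List<String> watchServers(ZooKeeper zk, Watcher onChange)
      throws KeeperException, InterruptedException {
    return zk.getChildren("/demo/rs", onChange);
  }
}
```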
2023-07-22 07:10:57,232 INFO [M:0;jenkins-hbase4:34769] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@15f599c4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 07:10:57,232 INFO [M:0;jenkins-hbase4:34769] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6cfc84e3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/hadoop.log.dir/,STOPPED} 2023-07-22 07:10:57,232 INFO [M:0;jenkins-hbase4:34769] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34769,1690009852974 2023-07-22 07:10:57,232 INFO [M:0;jenkins-hbase4:34769] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34769,1690009852974; all regions closed. 2023-07-22 07:10:57,232 DEBUG [M:0;jenkins-hbase4:34769] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:10:57,232 INFO [M:0;jenkins-hbase4:34769] master.HMaster(1491): Stopping master jetty server 2023-07-22 07:10:57,233 INFO [M:0;jenkins-hbase4:34769] server.AbstractConnector(383): Stopped ServerConnector@66b08be9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:10:57,233 DEBUG [M:0;jenkins-hbase4:34769] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-22 07:10:57,233 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-22 07:10:57,233 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690009854009] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690009854009,5,FailOnTimeoutGroup] 2023-07-22 07:10:57,233 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690009854032] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690009854032,5,FailOnTimeoutGroup] 2023-07-22 07:10:57,233 DEBUG [M:0;jenkins-hbase4:34769] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-22 07:10:57,235 INFO [M:0;jenkins-hbase4:34769] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-22 07:10:57,235 INFO [M:0;jenkins-hbase4:34769] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-22 07:10:57,235 INFO [M:0;jenkins-hbase4:34769] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-22 07:10:57,235 DEBUG [M:0;jenkins-hbase4:34769] master.HMaster(1512): Stopping service threads 2023-07-22 07:10:57,235 INFO [M:0;jenkins-hbase4:34769] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-22 07:10:57,236 ERROR [M:0;jenkins-hbase4:34769] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-22 07:10:57,236 INFO [M:0;jenkins-hbase4:34769] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-22 07:10:57,236 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
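The "Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, ...] on shutdown" line is the master's ChoreService reporting the periodic tasks still scheduled at shutdown. ChoreService and ScheduledChore are ordinary HBase classes, so the pattern can be sketched in isolation; the chore name, sleep, and Stoppable wiring below are illustrative assumptions, not the master's own code:

```java
import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

final class ChoreSketch {
  static void runBriefly(Stoppable stopper) throws InterruptedException {
    ChoreService service = new ChoreService("demo");
    // Fires chore() every 60s, the same 60000 ms period the QuotaObserverChore uses above.
    ScheduledChore demoChore = new ScheduledChore("DemoChore", stopper, 60_000) {
      @Override
      protected void chore() {
        // periodic work goes here
      }
    };
    service.scheduleChore(demoChore);
    Thread.sleep(1_000);
    // On shutdown the service reports whichever chores it still had, as in the log line above.
    service.shutdown();
  }
}
```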
2023-07-22 07:10:57,236 DEBUG [M:0;jenkins-hbase4:34769] zookeeper.ZKUtil(398): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-22 07:10:57,236 WARN [M:0;jenkins-hbase4:34769] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-22 07:10:57,236 INFO [M:0;jenkins-hbase4:34769] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-22 07:10:57,237 INFO [M:0;jenkins-hbase4:34769] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-22 07:10:57,237 DEBUG [M:0;jenkins-hbase4:34769] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-22 07:10:57,237 INFO [M:0;jenkins-hbase4:34769] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:57,237 DEBUG [M:0;jenkins-hbase4:34769] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:57,237 DEBUG [M:0;jenkins-hbase4:34769] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-22 07:10:57,237 DEBUG [M:0;jenkins-hbase4:34769] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:57,238 INFO [M:0;jenkins-hbase4:34769] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.00 KB heapSize=109.15 KB 2023-07-22 07:10:57,256 INFO [M:0;jenkins-hbase4:34769] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.00 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/830d0b194f3b490893a1987c6df3cc02 2023-07-22 07:10:57,261 DEBUG [M:0;jenkins-hbase4:34769] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/830d0b194f3b490893a1987c6df3cc02 as hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/830d0b194f3b490893a1987c6df3cc02 2023-07-22 07:10:57,266 INFO [M:0;jenkins-hbase4:34769] regionserver.HStore(1080): Added hdfs://localhost:45035/user/jenkins/test-data/563fc294-b7e0-2735-0ff8-3c56829b5c0e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/830d0b194f3b490893a1987c6df3cc02, entries=24, sequenceid=194, filesize=12.4 K 2023-07-22 07:10:57,269 INFO [M:0;jenkins-hbase4:34769] regionserver.HRegion(2948): Finished flush of dataSize ~93.00 KB/95231, heapSize ~109.13 KB/111752, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=194, compaction requested=false 2023-07-22 07:10:57,270 INFO [M:0;jenkins-hbase4:34769] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-22 07:10:57,270 DEBUG [M:0;jenkins-hbase4:34769] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 07:10:57,274 INFO [M:0;jenkins-hbase4:34769] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-22 07:10:57,274 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 07:10:57,274 INFO [M:0;jenkins-hbase4:34769] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34769 2023-07-22 07:10:57,276 DEBUG [M:0;jenkins-hbase4:34769] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34769,1690009852974 already deleted, retry=false 2023-07-22 07:10:57,568 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:57,568 INFO [M:0;jenkins-hbase4:34769] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34769,1690009852974; zookeeper connection closed. 2023-07-22 07:10:57,568 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): master:34769-0x1018bde5f1c0000, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:57,668 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:57,669 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:45671-0x1018bde5f1c0002, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:57,669 INFO [RS:1;jenkins-hbase4:45671] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45671,1690009853385; zookeeper connection closed. 2023-07-22 07:10:57,669 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@51cc2e0f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@51cc2e0f 2023-07-22 07:10:57,769 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:57,769 INFO [RS:2;jenkins-hbase4:41875] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41875,1690009853586; zookeeper connection closed. 
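At this point the master and all three RegionServers have dropped their ZooKeeper sessions, and the lines that follow show the HDFS datanodes shutting down, "Minicluster is down" being reported, and a fresh cluster being started. A test does not have to take the whole cluster down to exercise this machinery: the mini cluster can stop a single RegionServer and wait for it, which drives the same ephemeral-node expiration shown above for just that server. A sketch, assuming the usual HBaseTestingUtility handle:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;

final class SingleServerStopSketch {
  // Stop only the first of the three RegionServers and block until its threads exit;
  // the master then processes the expiration for that one server, as in the lines above.
  static void stopFirstRegionServer(HBaseTestingUtility util) throws Exception {
    MiniHBaseCluster cluster = util.getMiniHBaseCluster();
    cluster.stopRegionServer(0);
    cluster.waitOnRegionServer(0);
  }
}
```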
2023-07-22 07:10:57,769 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): regionserver:41875-0x1018bde5f1c0003, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:10:57,769 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3f1b63c1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3f1b63c1 2023-07-22 07:10:57,769 INFO [Listener at localhost/37479] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-22 07:10:57,770 WARN [Listener at localhost/37479] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-22 07:10:57,773 INFO [Listener at localhost/37479] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 07:10:57,878 WARN [BP-1888693318-172.31.14.131-1690009852048 heartbeating to localhost/127.0.0.1:45035] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-22 07:10:57,878 WARN [BP-1888693318-172.31.14.131-1690009852048 heartbeating to localhost/127.0.0.1:45035] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1888693318-172.31.14.131-1690009852048 (Datanode Uuid fdbcbc37-f430-471d-bd54-bf528deae911) service to localhost/127.0.0.1:45035 2023-07-22 07:10:57,878 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/cluster_27d71ade-ddaa-10e5-068b-4e91bdc3e5c4/dfs/data/data5/current/BP-1888693318-172.31.14.131-1690009852048] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 07:10:57,879 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/cluster_27d71ade-ddaa-10e5-068b-4e91bdc3e5c4/dfs/data/data6/current/BP-1888693318-172.31.14.131-1690009852048] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 07:10:57,881 WARN [Listener at localhost/37479] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-22 07:10:57,884 INFO [Listener at localhost/37479] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 07:10:57,987 WARN [BP-1888693318-172.31.14.131-1690009852048 heartbeating to localhost/127.0.0.1:45035] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-22 07:10:57,988 WARN [BP-1888693318-172.31.14.131-1690009852048 heartbeating to localhost/127.0.0.1:45035] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1888693318-172.31.14.131-1690009852048 (Datanode Uuid 3ee58b69-ca83-43cd-9465-a8785ed8b27f) service to localhost/127.0.0.1:45035 2023-07-22 07:10:57,989 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/cluster_27d71ade-ddaa-10e5-068b-4e91bdc3e5c4/dfs/data/data3/current/BP-1888693318-172.31.14.131-1690009852048] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 07:10:57,989 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/cluster_27d71ade-ddaa-10e5-068b-4e91bdc3e5c4/dfs/data/data4/current/BP-1888693318-172.31.14.131-1690009852048] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 07:10:57,990 WARN [Listener at localhost/37479] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-22 07:10:57,994 INFO [Listener at localhost/37479] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 07:10:58,100 WARN [BP-1888693318-172.31.14.131-1690009852048 heartbeating to localhost/127.0.0.1:45035] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-22 07:10:58,100 WARN [BP-1888693318-172.31.14.131-1690009852048 heartbeating to localhost/127.0.0.1:45035] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1888693318-172.31.14.131-1690009852048 (Datanode Uuid 601ca87e-2465-4a2d-a0c5-3f987c29b362) service to localhost/127.0.0.1:45035 2023-07-22 07:10:58,101 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/cluster_27d71ade-ddaa-10e5-068b-4e91bdc3e5c4/dfs/data/data1/current/BP-1888693318-172.31.14.131-1690009852048] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 07:10:58,101 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/cluster_27d71ade-ddaa-10e5-068b-4e91bdc3e5c4/dfs/data/data2/current/BP-1888693318-172.31.14.131-1690009852048] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 07:10:58,110 INFO [Listener at localhost/37479] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 07:10:58,232 INFO [Listener at localhost/37479] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-22 07:10:58,273 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-22 07:10:58,273 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-22 07:10:58,273 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/hadoop.log.dir so I do NOT create it in target/test-data/5a1f5817-4556-8293-71dc-0238ce857818 2023-07-22 07:10:58,273 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3e189d44-8ba9-11ba-8a1b-a3392a171034/hadoop.tmp.dir so I do NOT create it in target/test-data/5a1f5817-4556-8293-71dc-0238ce857818 2023-07-22 07:10:58,274 INFO [Listener at localhost/37479] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc, deleteOnExit=true 2023-07-22 07:10:58,274 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-22 07:10:58,274 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/test.cache.data in system properties and HBase conf 2023-07-22 07:10:58,274 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/hadoop.tmp.dir in system properties and HBase conf 2023-07-22 07:10:58,274 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/hadoop.log.dir in system properties and HBase conf 2023-07-22 07:10:58,275 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-22 07:10:58,275 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-22 07:10:58,275 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-22 07:10:58,275 DEBUG [Listener at localhost/37479] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-22 07:10:58,275 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-22 07:10:58,276 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-22 07:10:58,276 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-22 07:10:58,276 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-22 07:10:58,276 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-22 07:10:58,276 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-22 07:10:58,276 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-22 07:10:58,277 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-22 07:10:58,277 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-22 07:10:58,277 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/nfs.dump.dir in system properties and HBase conf 2023-07-22 07:10:58,277 INFO [Listener at localhost/37479] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/java.io.tmpdir in system properties and HBase conf 2023-07-22 07:10:58,277 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-22 07:10:58,277 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-22 07:10:58,278 INFO [Listener at localhost/37479] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-22 07:10:58,282 WARN [Listener at localhost/37479] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-22 07:10:58,283 WARN [Listener at localhost/37479] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-22 07:10:58,329 WARN [Listener at localhost/37479] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 07:10:58,329 DEBUG [Listener at localhost/37479-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1018bde5f1c000a, quorum=127.0.0.1:56037, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-22 07:10:58,329 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1018bde5f1c000a, quorum=127.0.0.1:56037, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-22 07:10:58,331 INFO [Listener at localhost/37479] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 07:10:58,335 INFO [Listener at localhost/37479] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/java.io.tmpdir/Jetty_localhost_43507_hdfs____.mr3xvd/webapp 2023-07-22 07:10:58,428 INFO [Listener at localhost/37479] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43507 2023-07-22 07:10:58,432 WARN [Listener at localhost/37479] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-22 07:10:58,433 WARN [Listener at localhost/37479] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-22 07:10:58,472 WARN [Listener at localhost/45267] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 07:10:58,486 WARN [Listener at localhost/45267] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-22 07:10:58,490 WARN [Listener 
at localhost/45267] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 07:10:58,491 INFO [Listener at localhost/45267] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 07:10:58,496 INFO [Listener at localhost/45267] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/java.io.tmpdir/Jetty_localhost_38025_datanode____oaft91/webapp 2023-07-22 07:10:58,599 INFO [Listener at localhost/45267] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38025 2023-07-22 07:10:58,610 WARN [Listener at localhost/45465] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 07:10:58,635 WARN [Listener at localhost/45465] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-22 07:10:58,638 WARN [Listener at localhost/45465] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 07:10:58,639 INFO [Listener at localhost/45465] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 07:10:58,643 INFO [Listener at localhost/45465] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/java.io.tmpdir/Jetty_localhost_39487_datanode____.fbr1cc/webapp 2023-07-22 07:10:58,752 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6a1e627243ffb365: Processing first storage report for DS-54862990-c4ed-4f54-b35e-aa588a06880e from datanode f1f095cc-1ee1-449e-9c51-f00bd4b30c9d 2023-07-22 07:10:58,752 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6a1e627243ffb365: from storage DS-54862990-c4ed-4f54-b35e-aa588a06880e node DatanodeRegistration(127.0.0.1:46471, datanodeUuid=f1f095cc-1ee1-449e-9c51-f00bd4b30c9d, infoPort=40555, infoSecurePort=0, ipcPort=45465, storageInfo=lv=-57;cid=testClusterID;nsid=1106383679;c=1690009858285), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:58,752 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6a1e627243ffb365: Processing first storage report for DS-9958af72-51d5-47e6-9ec7-485fb1aab954 from datanode f1f095cc-1ee1-449e-9c51-f00bd4b30c9d 2023-07-22 07:10:58,752 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6a1e627243ffb365: from storage DS-9958af72-51d5-47e6-9ec7-485fb1aab954 node DatanodeRegistration(127.0.0.1:46471, datanodeUuid=f1f095cc-1ee1-449e-9c51-f00bd4b30c9d, infoPort=40555, infoSecurePort=0, ipcPort=45465, storageInfo=lv=-57;cid=testClusterID;nsid=1106383679;c=1690009858285), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:58,762 INFO [Listener at localhost/45465] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39487 2023-07-22 07:10:58,791 WARN [Listener at localhost/46557] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-22 07:10:58,829 WARN [Listener at localhost/46557] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-22 07:10:58,834 WARN [Listener at localhost/46557] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 07:10:58,836 INFO [Listener at localhost/46557] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 07:10:58,838 INFO [Listener at localhost/46557] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/java.io.tmpdir/Jetty_localhost_33171_datanode____.1zkdny/webapp 2023-07-22 07:10:58,934 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2917f2fea182bf3b: Processing first storage report for DS-eba9c6db-5917-47d2-ac38-49a4525b7417 from datanode 0e8bc9f2-11d4-4158-906f-17e1bdd8ba6b 2023-07-22 07:10:58,934 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2917f2fea182bf3b: from storage DS-eba9c6db-5917-47d2-ac38-49a4525b7417 node DatanodeRegistration(127.0.0.1:43305, datanodeUuid=0e8bc9f2-11d4-4158-906f-17e1bdd8ba6b, infoPort=37511, infoSecurePort=0, ipcPort=46557, storageInfo=lv=-57;cid=testClusterID;nsid=1106383679;c=1690009858285), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:58,934 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2917f2fea182bf3b: Processing first storage report for DS-acd22f54-14a7-4099-98ed-2e39a2d14c98 from datanode 0e8bc9f2-11d4-4158-906f-17e1bdd8ba6b 2023-07-22 07:10:58,934 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2917f2fea182bf3b: from storage DS-acd22f54-14a7-4099-98ed-2e39a2d14c98 node DatanodeRegistration(127.0.0.1:43305, datanodeUuid=0e8bc9f2-11d4-4158-906f-17e1bdd8ba6b, infoPort=37511, infoSecurePort=0, ipcPort=46557, storageInfo=lv=-57;cid=testClusterID;nsid=1106383679;c=1690009858285), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:58,967 INFO [Listener at localhost/46557] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33171 2023-07-22 07:10:58,986 WARN [Listener at localhost/44075] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 07:10:59,139 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2ad87457de8845f5: Processing first storage report for DS-569084ad-ab2f-4422-bbd3-3b09e4616e87 from datanode b99d1d45-9a0a-45e7-baea-14c6c4bcb0b9 2023-07-22 07:10:59,139 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2ad87457de8845f5: from storage DS-569084ad-ab2f-4422-bbd3-3b09e4616e87 node DatanodeRegistration(127.0.0.1:40325, datanodeUuid=b99d1d45-9a0a-45e7-baea-14c6c4bcb0b9, infoPort=43039, infoSecurePort=0, ipcPort=44075, storageInfo=lv=-57;cid=testClusterID;nsid=1106383679;c=1690009858285), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:59,139 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2ad87457de8845f5: Processing first storage 
report for DS-3a5517b4-3a84-4894-947e-8e5eb1950c3d from datanode b99d1d45-9a0a-45e7-baea-14c6c4bcb0b9 2023-07-22 07:10:59,139 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2ad87457de8845f5: from storage DS-3a5517b4-3a84-4894-947e-8e5eb1950c3d node DatanodeRegistration(127.0.0.1:40325, datanodeUuid=b99d1d45-9a0a-45e7-baea-14c6c4bcb0b9, infoPort=43039, infoSecurePort=0, ipcPort=44075, storageInfo=lv=-57;cid=testClusterID;nsid=1106383679;c=1690009858285), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 07:10:59,215 DEBUG [Listener at localhost/44075] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818 2023-07-22 07:10:59,217 INFO [Listener at localhost/44075] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/zookeeper_0, clientPort=58374, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-22 07:10:59,218 INFO [Listener at localhost/44075] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=58374 2023-07-22 07:10:59,219 INFO [Listener at localhost/44075] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:59,220 INFO [Listener at localhost/44075] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:59,234 INFO [Listener at localhost/44075] util.FSUtils(471): Created version file at hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab with version=8 2023-07-22 07:10:59,234 INFO [Listener at localhost/44075] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:40817/user/jenkins/test-data/03c5a9df-8bfd-ef75-fd2e-4ad959ae9666/hbase-staging 2023-07-22 07:10:59,235 DEBUG [Listener at localhost/44075] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-22 07:10:59,235 DEBUG [Listener at localhost/44075] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-22 07:10:59,236 DEBUG [Listener at localhost/44075] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-22 07:10:59,236 DEBUG [Listener at localhost/44075] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
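The entries above show the harness bringing up the mini DFS datanodes, a MiniZooKeeperCluster on clientPort=58374, and a LocalHBaseCluster with randomized master and region-server ports. Below is a minimal sketch of how such a cluster is typically started from a JUnit test with HBaseTestingUtility; the class name MiniClusterSketch and the use of a main method (instead of the test's actual setup methods, which are not visible in this log) are illustrative assumptions, not the test's real code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        // The utility owns the test-data directory plus the mini DFS,
        // mini ZooKeeper and mini HBase clusters whose startup is logged above.
        Configuration conf = HBaseConfiguration.create();
        HBaseTestingUtility util = new HBaseTestingUtility(conf);

        // Start a single master plus three region servers (and matching datanodes);
        // this produces startup output of the kind seen in this log.
        util.startMiniCluster(3);
        try {
          // ... exercise util.getAdmin() / util.getConnection() here ...
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }
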
2023-07-22 07:10:59,236 INFO [Listener at localhost/44075] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 07:10:59,236 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:59,237 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:59,237 INFO [Listener at localhost/44075] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 07:10:59,237 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:59,237 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 07:10:59,237 INFO [Listener at localhost/44075] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 07:10:59,238 INFO [Listener at localhost/44075] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39207 2023-07-22 07:10:59,238 INFO [Listener at localhost/44075] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:59,239 INFO [Listener at localhost/44075] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:59,240 INFO [Listener at localhost/44075] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39207 connecting to ZooKeeper ensemble=127.0.0.1:58374 2023-07-22 07:10:59,252 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:392070x0, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:10:59,252 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39207-0x1018bde77940000 connected 2023-07-22 07:10:59,265 DEBUG [Listener at localhost/44075] zookeeper.ZKUtil(164): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 07:10:59,266 DEBUG [Listener at localhost/44075] zookeeper.ZKUtil(164): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:59,266 DEBUG [Listener at localhost/44075] zookeeper.ZKUtil(164): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 07:10:59,266 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39207 2023-07-22 07:10:59,267 DEBUG [Listener at localhost/44075] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39207 2023-07-22 07:10:59,267 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39207 2023-07-22 07:10:59,267 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39207 2023-07-22 07:10:59,267 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39207 2023-07-22 07:10:59,269 INFO [Listener at localhost/44075] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 07:10:59,269 INFO [Listener at localhost/44075] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 07:10:59,269 INFO [Listener at localhost/44075] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 07:10:59,270 INFO [Listener at localhost/44075] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-22 07:10:59,270 INFO [Listener at localhost/44075] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 07:10:59,270 INFO [Listener at localhost/44075] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 07:10:59,270 INFO [Listener at localhost/44075] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
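The RpcExecutor and RWQueueRpcExecutor entries above describe the master's RPC handler and call-queue layout (handlerCount=3, one default FIFO queue, a read/write split on the priority queue, no scan queues). The sketch below lists standard configuration keys that shape that layout; the concrete values are assumptions, since the log does not show which keys the test harness actually overrides.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RpcQueueConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Number of RPC handler threads per server (handlerCount=3 in the log).
        conf.setInt("hbase.regionserver.handler.count", 3);
        // Fraction of call-queue handlers dedicated to reads when a
        // read/write split is enabled (illustrative value).
        conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.5f);
        // No dedicated scan queues (scanQueues=0 scanHandlers=0 in the log).
        conf.setFloat("hbase.ipc.server.callqueue.scan.ratio", 0.0f);
        System.out.println(conf.getInt("hbase.regionserver.handler.count", -1));
      }
    }
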
2023-07-22 07:10:59,270 INFO [Listener at localhost/44075] http.HttpServer(1146): Jetty bound to port 44693 2023-07-22 07:10:59,271 INFO [Listener at localhost/44075] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:59,272 INFO [Listener at localhost/44075] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:59,272 INFO [Listener at localhost/44075] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6ed2f93{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/hadoop.log.dir/,AVAILABLE} 2023-07-22 07:10:59,272 INFO [Listener at localhost/44075] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:59,273 INFO [Listener at localhost/44075] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4cf54f13{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 07:10:59,397 INFO [Listener at localhost/44075] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 07:10:59,398 INFO [Listener at localhost/44075] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 07:10:59,399 INFO [Listener at localhost/44075] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 07:10:59,399 INFO [Listener at localhost/44075] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-22 07:10:59,400 INFO [Listener at localhost/44075] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:59,401 INFO [Listener at localhost/44075] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@43f53346{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/java.io.tmpdir/jetty-0_0_0_0-44693-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4263253713260272952/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-22 07:10:59,403 INFO [Listener at localhost/44075] server.AbstractConnector(333): Started ServerConnector@371cb2cf{HTTP/1.1, (http/1.1)}{0.0.0.0:44693} 2023-07-22 07:10:59,403 INFO [Listener at localhost/44075] server.Server(415): Started @41809ms 2023-07-22 07:10:59,403 INFO [Listener at localhost/44075] master.HMaster(444): hbase.rootdir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab, hbase.cluster.distributed=false 2023-07-22 07:10:59,417 INFO [Listener at localhost/44075] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 07:10:59,417 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:59,418 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:59,418 INFO 
[Listener at localhost/44075] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 07:10:59,418 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:59,418 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 07:10:59,418 INFO [Listener at localhost/44075] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 07:10:59,419 INFO [Listener at localhost/44075] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38441 2023-07-22 07:10:59,419 INFO [Listener at localhost/44075] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 07:10:59,431 DEBUG [Listener at localhost/44075] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 07:10:59,432 INFO [Listener at localhost/44075] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:59,434 INFO [Listener at localhost/44075] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:59,435 INFO [Listener at localhost/44075] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38441 connecting to ZooKeeper ensemble=127.0.0.1:58374 2023-07-22 07:10:59,441 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:384410x0, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:10:59,442 DEBUG [Listener at localhost/44075] zookeeper.ZKUtil(164): regionserver:384410x0, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 07:10:59,444 DEBUG [Listener at localhost/44075] zookeeper.ZKUtil(164): regionserver:384410x0, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:59,444 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38441-0x1018bde77940001 connected 2023-07-22 07:10:59,445 DEBUG [Listener at localhost/44075] zookeeper.ZKUtil(164): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 07:10:59,445 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38441 2023-07-22 07:10:59,446 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38441 2023-07-22 07:10:59,446 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38441 2023-07-22 07:10:59,446 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, 
numCallQueues=1, port=38441 2023-07-22 07:10:59,446 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38441 2023-07-22 07:10:59,448 INFO [Listener at localhost/44075] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 07:10:59,448 INFO [Listener at localhost/44075] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 07:10:59,449 INFO [Listener at localhost/44075] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 07:10:59,449 INFO [Listener at localhost/44075] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 07:10:59,449 INFO [Listener at localhost/44075] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 07:10:59,449 INFO [Listener at localhost/44075] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 07:10:59,449 INFO [Listener at localhost/44075] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-22 07:10:59,450 INFO [Listener at localhost/44075] http.HttpServer(1146): Jetty bound to port 44801 2023-07-22 07:10:59,450 INFO [Listener at localhost/44075] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:59,457 INFO [Listener at localhost/44075] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:59,457 INFO [Listener at localhost/44075] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1161f604{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/hadoop.log.dir/,AVAILABLE} 2023-07-22 07:10:59,458 INFO [Listener at localhost/44075] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:59,458 INFO [Listener at localhost/44075] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6247dd23{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 07:10:59,583 INFO [Listener at localhost/44075] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 07:10:59,583 INFO [Listener at localhost/44075] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 07:10:59,584 INFO [Listener at localhost/44075] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 07:10:59,584 INFO [Listener at localhost/44075] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-22 07:10:59,585 INFO [Listener at localhost/44075] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:59,585 INFO [Listener at localhost/44075] 
handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@22643add{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/java.io.tmpdir/jetty-0_0_0_0-44801-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8515507869225454241/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:59,587 INFO [Listener at localhost/44075] server.AbstractConnector(333): Started ServerConnector@57cc7d77{HTTP/1.1, (http/1.1)}{0.0.0.0:44801} 2023-07-22 07:10:59,587 INFO [Listener at localhost/44075] server.Server(415): Started @41992ms 2023-07-22 07:10:59,598 INFO [Listener at localhost/44075] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 07:10:59,598 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:59,599 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:59,599 INFO [Listener at localhost/44075] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 07:10:59,599 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:59,599 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 07:10:59,599 INFO [Listener at localhost/44075] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 07:10:59,600 INFO [Listener at localhost/44075] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34227 2023-07-22 07:10:59,600 INFO [Listener at localhost/44075] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 07:10:59,601 DEBUG [Listener at localhost/44075] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 07:10:59,602 INFO [Listener at localhost/44075] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:59,603 INFO [Listener at localhost/44075] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:59,604 INFO [Listener at localhost/44075] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34227 connecting to ZooKeeper ensemble=127.0.0.1:58374 2023-07-22 07:10:59,608 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:342270x0, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:10:59,609 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34227-0x1018bde77940002 connected 2023-07-22 07:10:59,609 DEBUG [Listener at localhost/44075] zookeeper.ZKUtil(164): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 07:10:59,610 DEBUG [Listener at localhost/44075] zookeeper.ZKUtil(164): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:59,610 DEBUG [Listener at localhost/44075] zookeeper.ZKUtil(164): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 07:10:59,610 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34227 2023-07-22 07:10:59,611 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34227 2023-07-22 07:10:59,613 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34227 2023-07-22 07:10:59,613 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34227 2023-07-22 07:10:59,614 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34227 2023-07-22 07:10:59,616 INFO [Listener at localhost/44075] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 07:10:59,616 INFO [Listener at localhost/44075] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 07:10:59,616 INFO [Listener at localhost/44075] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 07:10:59,617 INFO [Listener at localhost/44075] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 07:10:59,617 INFO [Listener at localhost/44075] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 07:10:59,617 INFO [Listener at localhost/44075] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 07:10:59,617 INFO [Listener at localhost/44075] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
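The ZKUtil/ZKWatcher entries above show each new region server connecting to the ensemble at 127.0.0.1:58374 and registering watches on /hbase/master, /hbase/running and /hbase/acl before those znodes exist. The sketch below illustrates the same pattern with the plain Apache ZooKeeper client rather than HBase's internal ZKWatcher/ZKUtil helpers; the class name ZnodeWatchSketch and the 30-second session timeout are assumptions.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.ZooKeeper;

    public class ZnodeWatchSketch {
      public static void main(String[] args) throws Exception {
        // Connect to the same quorum string the log reports for this test run.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:58374", 30000,
            (WatchedEvent event) -> System.out.println("Event: " + event));

        // exists() registers a watch even when the znode is absent, which is what
        // the "Set watcher on znode that does not yet exist" entries correspond to.
        if (zk.exists("/hbase/master", true) == null) {
          System.out.println("/hbase/master not created yet; watch registered");
        }
        zk.close();
      }
    }
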
2023-07-22 07:10:59,617 INFO [Listener at localhost/44075] http.HttpServer(1146): Jetty bound to port 39473 2023-07-22 07:10:59,618 INFO [Listener at localhost/44075] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:59,619 INFO [Listener at localhost/44075] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:59,619 INFO [Listener at localhost/44075] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1ada7561{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/hadoop.log.dir/,AVAILABLE} 2023-07-22 07:10:59,620 INFO [Listener at localhost/44075] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:59,620 INFO [Listener at localhost/44075] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7f99f0c1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 07:10:59,744 INFO [Listener at localhost/44075] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 07:10:59,744 INFO [Listener at localhost/44075] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 07:10:59,744 INFO [Listener at localhost/44075] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 07:10:59,745 INFO [Listener at localhost/44075] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-22 07:10:59,745 INFO [Listener at localhost/44075] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:59,746 INFO [Listener at localhost/44075] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4874feda{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/java.io.tmpdir/jetty-0_0_0_0-39473-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6592941647974844838/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:59,748 INFO [Listener at localhost/44075] server.AbstractConnector(333): Started ServerConnector@373e134e{HTTP/1.1, (http/1.1)}{0.0.0.0:39473} 2023-07-22 07:10:59,749 INFO [Listener at localhost/44075] server.Server(415): Started @42154ms 2023-07-22 07:10:59,763 INFO [Listener at localhost/44075] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 07:10:59,763 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:59,763 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:59,764 INFO [Listener at localhost/44075] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 07:10:59,764 INFO 
[Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:10:59,764 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 07:10:59,764 INFO [Listener at localhost/44075] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 07:10:59,764 INFO [Listener at localhost/44075] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44257 2023-07-22 07:10:59,765 INFO [Listener at localhost/44075] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 07:10:59,766 DEBUG [Listener at localhost/44075] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 07:10:59,767 INFO [Listener at localhost/44075] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:59,768 INFO [Listener at localhost/44075] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:59,768 INFO [Listener at localhost/44075] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44257 connecting to ZooKeeper ensemble=127.0.0.1:58374 2023-07-22 07:10:59,772 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:442570x0, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:10:59,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44257-0x1018bde77940003 connected 2023-07-22 07:10:59,773 DEBUG [Listener at localhost/44075] zookeeper.ZKUtil(164): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 07:10:59,777 DEBUG [Listener at localhost/44075] zookeeper.ZKUtil(164): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:10:59,777 DEBUG [Listener at localhost/44075] zookeeper.ZKUtil(164): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 07:10:59,777 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44257 2023-07-22 07:10:59,778 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44257 2023-07-22 07:10:59,778 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44257 2023-07-22 07:10:59,778 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44257 2023-07-22 07:10:59,778 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44257 2023-07-22 07:10:59,780 INFO [Listener at localhost/44075] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 07:10:59,780 INFO [Listener at localhost/44075] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 07:10:59,780 INFO [Listener at localhost/44075] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 07:10:59,781 INFO [Listener at localhost/44075] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 07:10:59,781 INFO [Listener at localhost/44075] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 07:10:59,781 INFO [Listener at localhost/44075] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 07:10:59,781 INFO [Listener at localhost/44075] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-22 07:10:59,781 INFO [Listener at localhost/44075] http.HttpServer(1146): Jetty bound to port 33697 2023-07-22 07:10:59,781 INFO [Listener at localhost/44075] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:59,783 INFO [Listener at localhost/44075] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:59,783 INFO [Listener at localhost/44075] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1f01ca24{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/hadoop.log.dir/,AVAILABLE} 2023-07-22 07:10:59,783 INFO [Listener at localhost/44075] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:59,783 INFO [Listener at localhost/44075] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@54a08fd6{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 07:10:59,899 INFO [Listener at localhost/44075] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 07:10:59,900 INFO [Listener at localhost/44075] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 07:10:59,900 INFO [Listener at localhost/44075] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 07:10:59,900 INFO [Listener at localhost/44075] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-22 07:10:59,901 INFO [Listener at localhost/44075] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:10:59,902 INFO [Listener at localhost/44075] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2972cbd7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/java.io.tmpdir/jetty-0_0_0_0-33697-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6398673045005953955/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:10:59,903 INFO [Listener at localhost/44075] server.AbstractConnector(333): Started ServerConnector@53d5e86c{HTTP/1.1, (http/1.1)}{0.0.0.0:33697} 2023-07-22 07:10:59,903 INFO [Listener at localhost/44075] server.Server(415): Started @42309ms 2023-07-22 07:10:59,905 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:10:59,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@780b01d8{HTTP/1.1, (http/1.1)}{0.0.0.0:38169} 2023-07-22 07:10:59,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @42315ms 2023-07-22 07:10:59,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,39207,1690009859236 2023-07-22 07:10:59,911 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-22 07:10:59,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,39207,1690009859236 2023-07-22 07:10:59,914 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 07:10:59,914 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 07:10:59,914 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 07:10:59,915 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:59,914 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 07:10:59,916 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 07:10:59,918 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,39207,1690009859236 from backup master directory 2023-07-22 07:10:59,918 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 07:10:59,919 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,39207,1690009859236 2023-07-22 07:10:59,919 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 07:10:59,919 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-22 07:10:59,919 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,39207,1690009859236 2023-07-22 07:10:59,939 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/hbase.id with ID: 6d321daf-8ef8-4f33-b81c-e1a0fa103c1e 2023-07-22 07:10:59,949 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:10:59,952 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:10:59,969 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1a2a3278 to 127.0.0.1:58374 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:10:59,974 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@24869c30, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:10:59,974 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:10:59,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-22 07:10:59,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:10:59,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/MasterData/data/master/store-tmp 2023-07-22 07:10:59,984 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:10:59,984 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-22 07:10:59,984 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:59,984 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:59,984 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-22 07:10:59,984 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 07:10:59,984 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
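The entries above print the full descriptor the master uses for its local 'master:store' region: a single 'proc' family with VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536'. The sketch below rebuilds an equivalent descriptor with the public TableDescriptorBuilder/ColumnFamilyDescriptorBuilder API; it mirrors the logged values but is not how MasterRegion constructs the descriptor internally.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MasterStoreDescriptorSketch {
      public static void main(String[] args) {
        // One 'proc' column family matching the attributes printed in the log.
        TableDescriptor store = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("master", "store"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
                .setMaxVersions(1)
                .setBloomFilterType(BloomType.ROW)
                .setBlocksize(65536)
                .setInMemory(false)
                .build())
            .build();
        System.out.println(store);
      }
    }
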
2023-07-22 07:10:59,984 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 07:10:59,985 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/MasterData/WALs/jenkins-hbase4.apache.org,39207,1690009859236 2023-07-22 07:10:59,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39207%2C1690009859236, suffix=, logDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/MasterData/WALs/jenkins-hbase4.apache.org,39207,1690009859236, archiveDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/MasterData/oldWALs, maxLogs=10 2023-07-22 07:11:00,010 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43305,DS-eba9c6db-5917-47d2-ac38-49a4525b7417,DISK] 2023-07-22 07:11:00,010 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46471,DS-54862990-c4ed-4f54-b35e-aa588a06880e,DISK] 2023-07-22 07:11:00,017 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40325,DS-569084ad-ab2f-4422-bbd3-3b09e4616e87,DISK] 2023-07-22 07:11:00,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/MasterData/WALs/jenkins-hbase4.apache.org,39207,1690009859236/jenkins-hbase4.apache.org%2C39207%2C1690009859236.1690009859988 2023-07-22 07:11:00,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43305,DS-eba9c6db-5917-47d2-ac38-49a4525b7417,DISK], DatanodeInfoWithStorage[127.0.0.1:40325,DS-569084ad-ab2f-4422-bbd3-3b09e4616e87,DISK], DatanodeInfoWithStorage[127.0.0.1:46471,DS-54862990-c4ed-4f54-b35e-aa588a06880e,DISK]] 2023-07-22 07:11:00,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:11:00,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:11:00,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:11:00,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:11:00,024 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:11:00,026 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-22 07:11:00,026 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-22 07:11:00,027 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:11:00,028 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:11:00,028 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:11:00,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 07:11:00,033 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:11:00,033 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10372367520, jitterRate=-0.033997997641563416}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:11:00,033 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 07:11:00,034 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-22 07:11:00,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-22 07:11:00,035 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-22 07:11:00,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-22 07:11:00,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-22 07:11:00,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-22 07:11:00,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-22 07:11:00,037 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-22 07:11:00,038 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-22 07:11:00,039 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-22 07:11:00,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-22 07:11:00,039 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-22 07:11:00,042 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:11:00,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-22 07:11:00,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-22 07:11:00,044 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-22 07:11:00,046 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 07:11:00,046 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 07:11:00,046 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-22 07:11:00,047 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 07:11:00,047 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:11:00,047 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,39207,1690009859236, sessionid=0x1018bde77940000, setting cluster-up flag (Was=false) 2023-07-22 07:11:00,052 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:11:00,060 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-22 07:11:00,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39207,1690009859236 2023-07-22 07:11:00,064 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:11:00,068 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-22 07:11:00,069 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39207,1690009859236 2023-07-22 07:11:00,070 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.hbase-snapshot/.tmp 2023-07-22 07:11:00,071 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-22 07:11:00,071 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-22 07:11:00,071 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-22 07:11:00,072 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39207,1690009859236] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-22 07:11:00,072 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
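The master and region-server entries above repeatedly set watchers on znodes that do not exist yet (/hbase/balancer, /hbase/normalizer, /hbase/switch/split, /hbase/switch/merge, /hbase/snapshot-cleanup) and then receive NodeCreated and NodeChildrenChanged events such as the ones for /hbase/running. A minimal sketch of that watch pattern with the plain ZooKeeper client API follows; the quorum address and session timeout are the ones reported in the log, while the class name and the 60-second sleep are illustrative, not HBase's ZKWatcher/ZKUtil code.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ZnodeWatchSketch {
        public static void main(String[] args) throws Exception {
            // Quorum and session timeout as reported in the log lines above.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:58374", 90000, event -> { });

            Watcher watcher = (WatchedEvent event) ->
                    // Fires with types such as NodeCreated or NodeChildrenChanged,
                    // matching the ZKWatcher "Received ZooKeeper Event" lines.
                    System.out.println("type=" + event.getType()
                            + ", state=" + event.getState() + ", path=" + event.getPath());

            // exists() registers a watch even when the znode is absent, which is what
            // "Set watcher on znode that does not yet exist" refers to.
            zk.exists("/hbase/balancer", watcher);

            // A child watch on the base znode yields NodeChildrenChanged events.
            zk.getChildren("/hbase", watcher);

            Thread.sleep(60_000);
            zk.close();
        }
    }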
2023-07-22 07:11:00,074 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-22 07:11:00,088 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-22 07:11:00,088 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-22 07:11:00,088 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-22 07:11:00,088 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-22 07:11:00,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 07:11:00,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 07:11:00,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 07:11:00,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 07:11:00,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-22 07:11:00,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 07:11:00,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690009890092 2023-07-22 07:11:00,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-22 07:11:00,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-22 07:11:00,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-22 07:11:00,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-22 07:11:00,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-22 07:11:00,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-22 07:11:00,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,093 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-22 07:11:00,093 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-22 07:11:00,094 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-22 07:11:00,094 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-22 07:11:00,094 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-22 07:11:00,095 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-22 07:11:00,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-22 07:11:00,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-22 07:11:00,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690009860099,5,FailOnTimeoutGroup] 2023-07-22 07:11:00,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690009860100,5,FailOnTimeoutGroup] 2023-07-22 07:11:00,100 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,100 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-22 07:11:00,100 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,100 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,105 INFO [RS:1;jenkins-hbase4:34227] regionserver.HRegionServer(951): ClusterId : 6d321daf-8ef8-4f33-b81c-e1a0fa103c1e 2023-07-22 07:11:00,105 INFO [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(951): ClusterId : 6d321daf-8ef8-4f33-b81c-e1a0fa103c1e 2023-07-22 07:11:00,105 DEBUG [RS:1;jenkins-hbase4:34227] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 07:11:00,106 DEBUG [RS:0;jenkins-hbase4:38441] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 07:11:00,105 INFO [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(951): ClusterId : 6d321daf-8ef8-4f33-b81c-e1a0fa103c1e 2023-07-22 07:11:00,106 DEBUG [RS:2;jenkins-hbase4:44257] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 07:11:00,108 DEBUG [RS:1;jenkins-hbase4:34227] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 07:11:00,108 DEBUG [RS:1;jenkins-hbase4:34227] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 07:11:00,108 DEBUG [RS:0;jenkins-hbase4:38441] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 07:11:00,109 DEBUG [RS:0;jenkins-hbase4:38441] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 07:11:00,109 DEBUG [RS:2;jenkins-hbase4:44257] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 07:11:00,109 DEBUG [RS:2;jenkins-hbase4:44257] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 07:11:00,111 DEBUG [RS:1;jenkins-hbase4:34227] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 07:11:00,111 DEBUG [RS:2;jenkins-hbase4:44257] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 07:11:00,111 DEBUG [RS:0;jenkins-hbase4:38441] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 07:11:00,117 DEBUG 
[RS:1;jenkins-hbase4:34227] zookeeper.ReadOnlyZKClient(139): Connect 0x7d0bec28 to 127.0.0.1:58374 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:11:00,121 DEBUG [RS:0;jenkins-hbase4:38441] zookeeper.ReadOnlyZKClient(139): Connect 0x27b7efb5 to 127.0.0.1:58374 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:11:00,121 DEBUG [RS:2;jenkins-hbase4:44257] zookeeper.ReadOnlyZKClient(139): Connect 0x31d2ea36 to 127.0.0.1:58374 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:11:00,130 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-22 07:11:00,131 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-22 07:11:00,131 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab 2023-07-22 07:11:00,132 DEBUG [RS:1;jenkins-hbase4:34227] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@73cc0df9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:11:00,133 DEBUG [RS:1;jenkins-hbase4:34227] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@77f38758, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 07:11:00,133 DEBUG [RS:0;jenkins-hbase4:38441] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@274daf1a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:11:00,133 DEBUG [RS:0;jenkins-hbase4:38441] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@165a88e8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, 
connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 07:11:00,135 DEBUG [RS:2;jenkins-hbase4:44257] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@f490875, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:11:00,135 DEBUG [RS:2;jenkins-hbase4:44257] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@a97e19, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 07:11:00,145 DEBUG [RS:2;jenkins-hbase4:44257] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:44257 2023-07-22 07:11:00,145 INFO [RS:2;jenkins-hbase4:44257] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 07:11:00,145 INFO [RS:2;jenkins-hbase4:44257] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 07:11:00,145 DEBUG [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 07:11:00,146 INFO [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39207,1690009859236 with isa=jenkins-hbase4.apache.org/172.31.14.131:44257, startcode=1690009859763 2023-07-22 07:11:00,146 DEBUG [RS:0;jenkins-hbase4:38441] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:38441 2023-07-22 07:11:00,146 INFO [RS:0;jenkins-hbase4:38441] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 07:11:00,146 INFO [RS:0;jenkins-hbase4:38441] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 07:11:00,146 DEBUG [RS:2;jenkins-hbase4:44257] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 07:11:00,146 DEBUG [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 07:11:00,147 INFO [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39207,1690009859236 with isa=jenkins-hbase4.apache.org/172.31.14.131:38441, startcode=1690009859417 2023-07-22 07:11:00,147 DEBUG [RS:0;jenkins-hbase4:38441] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 07:11:00,147 DEBUG [RS:1;jenkins-hbase4:34227] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:34227 2023-07-22 07:11:00,147 INFO [RS:1;jenkins-hbase4:34227] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 07:11:00,147 INFO [RS:1;jenkins-hbase4:34227] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 07:11:00,147 DEBUG [RS:1;jenkins-hbase4:34227] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-22 07:11:00,148 INFO [RS:1;jenkins-hbase4:34227] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39207,1690009859236 with isa=jenkins-hbase4.apache.org/172.31.14.131:34227, startcode=1690009859598 2023-07-22 07:11:00,148 DEBUG [RS:1;jenkins-hbase4:34227] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 07:11:00,198 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52669, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 07:11:00,198 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51117, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 07:11:00,198 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54113, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 07:11:00,200 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39207] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:00,200 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39207,1690009859236] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-22 07:11:00,201 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39207,1690009859236] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-22 07:11:00,201 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39207] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:00,202 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39207] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:00,202 DEBUG [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab 2023-07-22 07:11:00,202 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39207,1690009859236] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-22 07:11:00,202 DEBUG [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45267 2023-07-22 07:11:00,202 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39207,1690009859236] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-22 07:11:00,202 DEBUG [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44693 2023-07-22 07:11:00,202 DEBUG [RS:1;jenkins-hbase4:34227] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab 2023-07-22 07:11:00,202 DEBUG [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab 2023-07-22 07:11:00,202 DEBUG [RS:1;jenkins-hbase4:34227] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45267 2023-07-22 07:11:00,202 DEBUG [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45267 2023-07-22 07:11:00,202 DEBUG [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44693 2023-07-22 07:11:00,202 DEBUG [RS:1;jenkins-hbase4:34227] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44693 2023-07-22 07:11:00,204 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:11:00,206 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:11:00,207 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-22 07:11:00,208 DEBUG [RS:2;jenkins-hbase4:44257] zookeeper.ZKUtil(162): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:00,208 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34227,1690009859598] 2023-07-22 07:11:00,208 WARN [RS:2;jenkins-hbase4:44257] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-22 07:11:00,208 DEBUG [RS:1;jenkins-hbase4:34227] zookeeper.ZKUtil(162): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:00,208 INFO [RS:2;jenkins-hbase4:44257] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:11:00,208 WARN [RS:1;jenkins-hbase4:34227] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 07:11:00,208 DEBUG [RS:0;jenkins-hbase4:38441] zookeeper.ZKUtil(162): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:00,208 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44257,1690009859763] 2023-07-22 07:11:00,208 WARN [RS:0;jenkins-hbase4:38441] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 07:11:00,208 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38441,1690009859417] 2023-07-22 07:11:00,208 INFO [RS:0;jenkins-hbase4:38441] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:11:00,208 INFO [RS:1;jenkins-hbase4:34227] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:11:00,209 DEBUG [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/WALs/jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:00,208 DEBUG [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/WALs/jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:00,209 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/info 2023-07-22 07:11:00,209 DEBUG [RS:1;jenkins-hbase4:34227] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/WALs/jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:00,209 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-22 07:11:00,211 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, 
storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:11:00,213 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-22 07:11:00,220 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/rep_barrier 2023-07-22 07:11:00,220 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-22 07:11:00,220 DEBUG [RS:2;jenkins-hbase4:44257] zookeeper.ZKUtil(162): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:00,220 DEBUG [RS:0;jenkins-hbase4:38441] zookeeper.ZKUtil(162): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:00,221 DEBUG [RS:1;jenkins-hbase4:34227] zookeeper.ZKUtil(162): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:00,221 DEBUG [RS:2;jenkins-hbase4:44257] zookeeper.ZKUtil(162): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:00,221 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:11:00,221 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-22 07:11:00,221 DEBUG [RS:0;jenkins-hbase4:38441] zookeeper.ZKUtil(162): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:00,221 DEBUG [RS:1;jenkins-hbase4:34227] zookeeper.ZKUtil(162): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:00,221 DEBUG [RS:2;jenkins-hbase4:44257] zookeeper.ZKUtil(162): 
regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:00,222 DEBUG [RS:1;jenkins-hbase4:34227] zookeeper.ZKUtil(162): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:00,222 DEBUG [RS:0;jenkins-hbase4:38441] zookeeper.ZKUtil(162): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:00,223 DEBUG [RS:2;jenkins-hbase4:44257] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 07:11:00,223 DEBUG [RS:1;jenkins-hbase4:34227] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 07:11:00,223 INFO [RS:2;jenkins-hbase4:44257] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 07:11:00,223 INFO [RS:1;jenkins-hbase4:34227] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 07:11:00,224 INFO [RS:2;jenkins-hbase4:44257] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 07:11:00,224 DEBUG [RS:0;jenkins-hbase4:38441] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 07:11:00,224 INFO [RS:2;jenkins-hbase4:44257] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 07:11:00,225 INFO [RS:2;jenkins-hbase4:44257] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
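The MemStoreFlusher line above reports globalMemStoreLimit=782.4 M alongside globalMemStoreLimitLowMark=743.3 M; dividing the two gives about 0.95, i.e. the low-water mark here sits at roughly 95% of the global memstore limit. A quick check of that ratio (illustrative arithmetic on the logged values only, not the MemStoreFlusher implementation):

    public class MemStoreLimitCheck {
        public static void main(String[] args) {
            double globalLimitMb = 782.4; // globalMemStoreLimit from the log
            double lowMarkMb = 743.3;     // globalMemStoreLimitLowMark from the log
            // Prints ~0.9500: the low mark is about 95% of the global limit.
            System.out.printf("lowMark / limit = %.4f%n", lowMarkMb / globalLimitMb);
        }
    }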
2023-07-22 07:11:00,225 INFO [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 07:11:00,226 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/table 2023-07-22 07:11:00,226 INFO [RS:0;jenkins-hbase4:38441] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 07:11:00,226 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-22 07:11:00,226 INFO [RS:1;jenkins-hbase4:34227] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 07:11:00,227 INFO [RS:2;jenkins-hbase4:44257] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,227 INFO [RS:1;jenkins-hbase4:34227] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 07:11:00,227 INFO [RS:1;jenkins-hbase4:34227] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
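The PressureAwareCompactionThroughputController entries advertise a throughput band with a lower bound of 50 MB/second and a higher bound of 100 MB/second, re-tuned every 60000 ms. One way to picture such a controller is a linear interpolation across that band as compaction pressure rises; the sketch below is only that simplified model (the pressure input and the interpolation formula are assumptions), not the actual HBase tuning logic.

    public class CompactionThroughputBandSketch {
        // Bounds as reported in the log, converted to bytes per second.
        private static final double LOWER = 50.0 * 1024 * 1024;
        private static final double HIGHER = 100.0 * 1024 * 1024;

        /** Illustrative model only: map a pressure in [0, 1] onto the [LOWER, HIGHER] band. */
        static double throughputLimit(double pressure) {
            double p = Math.max(0.0, Math.min(1.0, pressure));
            return LOWER + (HIGHER - LOWER) * p;
        }

        public static void main(String[] args) {
            for (double pressure : new double[] {0.0, 0.5, 1.0}) {
                System.out.printf("pressure=%.1f -> limit=%.0f MB/second%n",
                        pressure, throughputLimit(pressure) / (1024 * 1024));
            }
        }
    }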
2023-07-22 07:11:00,227 DEBUG [RS:2;jenkins-hbase4:44257] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,227 DEBUG [RS:2;jenkins-hbase4:44257] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,227 INFO [RS:1;jenkins-hbase4:34227] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 07:11:00,227 DEBUG [RS:2;jenkins-hbase4:44257] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,227 DEBUG [RS:2;jenkins-hbase4:44257] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,227 DEBUG [RS:2;jenkins-hbase4:44257] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,227 DEBUG [RS:2;jenkins-hbase4:44257] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 07:11:00,227 DEBUG [RS:2;jenkins-hbase4:44257] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,228 DEBUG [RS:2;jenkins-hbase4:44257] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,228 DEBUG [RS:2;jenkins-hbase4:44257] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,228 DEBUG [RS:2;jenkins-hbase4:44257] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,229 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:11:00,230 INFO [RS:0;jenkins-hbase4:38441] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 07:11:00,232 INFO [RS:2;jenkins-hbase4:44257] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,232 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740 2023-07-22 07:11:00,232 INFO [RS:2;jenkins-hbase4:44257] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,235 INFO [RS:2;jenkins-hbase4:44257] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
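Each "Chore ScheduledChore name=..., period=..., unit=MILLISECONDS is enabled" entry corresponds to a periodic background task. The sketch below reproduces just the fixed-rate scheduling with the JDK's ScheduledExecutorService, using the 1000 ms and 360000 ms periods from the CompactionChecker and nonceCleaner chores; the task bodies are placeholders rather than HBase's ChoreService.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ChoreScheduleSketch {
        public static void main(String[] args) throws InterruptedException {
            ScheduledExecutorService chores = Executors.newScheduledThreadPool(2);

            // Periods taken from the log: CompactionChecker and MemstoreFlusherChore run
            // every 1000 ms, nonceCleaner every 360000 ms.
            chores.scheduleAtFixedRate(() -> System.out.println("CompactionChecker tick"),
                    0, 1000, TimeUnit.MILLISECONDS);
            chores.scheduleAtFixedRate(() -> System.out.println("nonceCleaner tick"),
                    0, 360_000, TimeUnit.MILLISECONDS);

            TimeUnit.SECONDS.sleep(5); // let a few ticks print, then stop
            chores.shutdownNow();
        }
    }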
2023-07-22 07:11:00,236 INFO [RS:0;jenkins-hbase4:38441] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 07:11:00,236 INFO [RS:0;jenkins-hbase4:38441] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,236 INFO [RS:1;jenkins-hbase4:34227] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,236 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740 2023-07-22 07:11:00,236 INFO [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 07:11:00,236 DEBUG [RS:1;jenkins-hbase4:34227] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,237 DEBUG [RS:1;jenkins-hbase4:34227] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,237 DEBUG [RS:1;jenkins-hbase4:34227] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,237 DEBUG [RS:1;jenkins-hbase4:34227] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,237 DEBUG [RS:1;jenkins-hbase4:34227] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,237 DEBUG [RS:1;jenkins-hbase4:34227] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 07:11:00,237 DEBUG [RS:1;jenkins-hbase4:34227] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,237 DEBUG [RS:1;jenkins-hbase4:34227] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,237 DEBUG [RS:1;jenkins-hbase4:34227] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,237 DEBUG [RS:1;jenkins-hbase4:34227] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,237 INFO [RS:0;jenkins-hbase4:38441] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
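The executor-service lines above (RS_OPEN_REGION, RS_CLOSE_REGION, RS_LOG_REPLAY_OPS, ...) each describe a small pool whose corePoolSize equals its maxPoolSize. In plain JDK terms that is a ThreadPoolExecutor with equal core and maximum sizes; the sketch below mirrors only that sizing convention, with pool names taken from the log and everything else assumed rather than HBase's ExecutorService wrapper.

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class FixedSizePoolSketch {
        /** Build a pool whose corePoolSize equals its maxPoolSize, as in the log lines. */
        static ThreadPoolExecutor newFixedPool(String name, int size) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(size, size, 60L, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<>(), r -> new Thread(r, name));
            pool.allowCoreThreadTimeOut(true); // idle threads may exit between bursts of work
            return pool;
        }

        public static void main(String[] args) {
            ThreadPoolExecutor openRegion = newFixedPool("RS_OPEN_REGION", 1);
            ThreadPoolExecutor logReplay = newFixedPool("RS_LOG_REPLAY_OPS", 2);
            openRegion.execute(() -> System.out.println("open-region task"));
            logReplay.execute(() -> System.out.println("log-replay task"));
            openRegion.shutdown();
            logReplay.shutdown();
        }
    }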
2023-07-22 07:11:00,237 DEBUG [RS:0;jenkins-hbase4:38441] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,238 DEBUG [RS:0;jenkins-hbase4:38441] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,238 DEBUG [RS:0;jenkins-hbase4:38441] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,238 DEBUG [RS:0;jenkins-hbase4:38441] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,238 DEBUG [RS:0;jenkins-hbase4:38441] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,238 DEBUG [RS:0;jenkins-hbase4:38441] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 07:11:00,238 DEBUG [RS:0;jenkins-hbase4:38441] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,238 DEBUG [RS:0;jenkins-hbase4:38441] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,238 DEBUG [RS:0;jenkins-hbase4:38441] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,238 DEBUG [RS:0;jenkins-hbase4:38441] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:00,243 INFO [RS:1;jenkins-hbase4:34227] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,243 INFO [RS:1;jenkins-hbase4:34227] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,243 INFO [RS:1;jenkins-hbase4:34227] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,244 INFO [RS:0;jenkins-hbase4:38441] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,244 INFO [RS:0;jenkins-hbase4:38441] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,245 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-22 07:11:00,250 INFO [RS:0;jenkins-hbase4:38441] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-22 07:11:00,252 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-22 07:11:00,255 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:11:00,255 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9799695840, jitterRate=-0.08733220398426056}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-22 07:11:00,256 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-22 07:11:00,256 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-22 07:11:00,256 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-22 07:11:00,256 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-22 07:11:00,256 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-22 07:11:00,256 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-22 07:11:00,256 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-22 07:11:00,256 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-22 07:11:00,256 INFO [RS:1;jenkins-hbase4:34227] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 07:11:00,257 INFO [RS:1;jenkins-hbase4:34227] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34227,1690009859598-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,257 INFO [RS:2;jenkins-hbase4:44257] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 07:11:00,258 INFO [RS:2;jenkins-hbase4:44257] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44257,1690009859763-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
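Two numbers in the region-open entry above can be reconstructed from values that appear earlier in this log: desiredMaxFileSize=9799695840 is consistent with a 10 GiB base file size scaled by (1 + jitterRate), and flushSizeLowerBound=44739242 is the 134217728-byte flush size divided across the three column families of hbase:meta (info, rep_barrier, table), the "42.7 M" mentioned in the FlushLargeStoresPolicy line. The arithmetic check below assumes the 10 GiB (10737418240-byte) base, which is not stated explicitly in the log.

    public class RegionSizeArithmeticCheck {
        public static void main(String[] args) {
            long baseMaxFileSize = 10_737_418_240L;   // assumed 10 GiB base region max file size
            double jitterRate = -0.08733220398426056; // jitterRate from the region-open line
            long desired = Math.round(baseMaxFileSize * (1 + jitterRate));
            System.out.println(desired);              // 9799695840, the logged desiredMaxFileSize

            long flushSize = 134_217_728L;            // flushSize logged at master-region startup
            int families = 3;                         // hbase:meta has info, rep_barrier, table
            System.out.println(flushSize / families); // 44739242 (~42.7 MB), the flushSizeLowerBound
        }
    }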
2023-07-22 07:11:00,259 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-22 07:11:00,259 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-22 07:11:00,259 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-22 07:11:00,260 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-22 07:11:00,261 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-22 07:11:00,265 INFO [RS:0;jenkins-hbase4:38441] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 07:11:00,265 INFO [RS:0;jenkins-hbase4:38441] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38441,1690009859417-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,275 INFO [RS:1;jenkins-hbase4:34227] regionserver.Replication(203): jenkins-hbase4.apache.org,34227,1690009859598 started 2023-07-22 07:11:00,275 INFO [RS:1;jenkins-hbase4:34227] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34227,1690009859598, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34227, sessionid=0x1018bde77940002 2023-07-22 07:11:00,275 INFO [RS:2;jenkins-hbase4:44257] regionserver.Replication(203): jenkins-hbase4.apache.org,44257,1690009859763 started 2023-07-22 07:11:00,277 DEBUG [RS:1;jenkins-hbase4:34227] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 07:11:00,277 DEBUG [RS:1;jenkins-hbase4:34227] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:00,277 DEBUG [RS:1;jenkins-hbase4:34227] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34227,1690009859598' 2023-07-22 07:11:00,277 DEBUG [RS:1;jenkins-hbase4:34227] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 07:11:00,277 INFO [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44257,1690009859763, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44257, sessionid=0x1018bde77940003 2023-07-22 07:11:00,277 DEBUG [RS:2;jenkins-hbase4:44257] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 07:11:00,277 DEBUG [RS:2;jenkins-hbase4:44257] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:00,277 DEBUG [RS:2;jenkins-hbase4:44257] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44257,1690009859763' 2023-07-22 07:11:00,277 DEBUG [RS:2;jenkins-hbase4:44257] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 
2023-07-22 07:11:00,277 DEBUG [RS:1;jenkins-hbase4:34227] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 07:11:00,278 DEBUG [RS:2;jenkins-hbase4:44257] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 07:11:00,278 DEBUG [RS:1;jenkins-hbase4:34227] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 07:11:00,278 DEBUG [RS:1;jenkins-hbase4:34227] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 07:11:00,278 DEBUG [RS:2;jenkins-hbase4:44257] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 07:11:00,278 DEBUG [RS:1;jenkins-hbase4:34227] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:00,278 DEBUG [RS:2;jenkins-hbase4:44257] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 07:11:00,278 DEBUG [RS:2;jenkins-hbase4:44257] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:00,278 DEBUG [RS:2;jenkins-hbase4:44257] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44257,1690009859763' 2023-07-22 07:11:00,278 DEBUG [RS:1;jenkins-hbase4:34227] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34227,1690009859598' 2023-07-22 07:11:00,278 DEBUG [RS:1;jenkins-hbase4:34227] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 07:11:00,278 DEBUG [RS:2;jenkins-hbase4:44257] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 07:11:00,278 DEBUG [RS:1;jenkins-hbase4:34227] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 07:11:00,278 DEBUG [RS:2;jenkins-hbase4:44257] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 07:11:00,279 DEBUG [RS:1;jenkins-hbase4:34227] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 07:11:00,279 INFO [RS:1;jenkins-hbase4:34227] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 07:11:00,279 INFO [RS:1;jenkins-hbase4:34227] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-22 07:11:00,279 DEBUG [RS:2;jenkins-hbase4:44257] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 07:11:00,279 INFO [RS:2;jenkins-hbase4:44257] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 07:11:00,279 INFO [RS:2;jenkins-hbase4:44257] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
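The ZKProcedureMemberRpcs entries above describe each region server's startup checks for the flush-table-proc and online-snapshot coordination: first look for an abort marker, then list whatever is waiting under the acquired znode. The sketch below performs those two reads with the plain ZooKeeper client against the znode paths from the log; it illustrates the checks being described, not HBase's ZKProcedureMemberRpcs code, and the quorum address is the one reported earlier.

    import java.util.Collections;
    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class ProcedureMemberCheckSketch {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("127.0.0.1:58374", 90000, event -> { });

            // The two member-side checks named in the log: an abort marker lookup and a
            // listing of procedures waiting under the "acquired" barrier znode.
            String base = "/hbase/flush-table-proc";
            boolean aborted = zk.exists(base + "/abort", false) != null
                    && !zk.getChildren(base + "/abort", false).isEmpty();
            System.out.println("aborted procedures present: " + aborted);

            List<String> pending = zk.exists(base + "/acquired", false) == null
                    ? Collections.emptyList()
                    : zk.getChildren(base + "/acquired", false);
            System.out.println("procedures waiting under " + base + "/acquired: " + pending);

            zk.close();
        }
    }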
2023-07-22 07:11:00,281 INFO [RS:0;jenkins-hbase4:38441] regionserver.Replication(203): jenkins-hbase4.apache.org,38441,1690009859417 started 2023-07-22 07:11:00,281 INFO [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38441,1690009859417, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38441, sessionid=0x1018bde77940001 2023-07-22 07:11:00,281 DEBUG [RS:0;jenkins-hbase4:38441] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 07:11:00,281 DEBUG [RS:0;jenkins-hbase4:38441] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:00,281 DEBUG [RS:0;jenkins-hbase4:38441] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38441,1690009859417' 2023-07-22 07:11:00,281 DEBUG [RS:0;jenkins-hbase4:38441] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 07:11:00,281 DEBUG [RS:0;jenkins-hbase4:38441] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 07:11:00,281 DEBUG [RS:0;jenkins-hbase4:38441] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 07:11:00,281 DEBUG [RS:0;jenkins-hbase4:38441] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 07:11:00,281 DEBUG [RS:0;jenkins-hbase4:38441] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:00,281 DEBUG [RS:0;jenkins-hbase4:38441] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38441,1690009859417' 2023-07-22 07:11:00,281 DEBUG [RS:0;jenkins-hbase4:38441] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 07:11:00,282 DEBUG [RS:0;jenkins-hbase4:38441] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 07:11:00,282 DEBUG [RS:0;jenkins-hbase4:38441] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 07:11:00,282 INFO [RS:0;jenkins-hbase4:38441] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 07:11:00,282 INFO [RS:0;jenkins-hbase4:38441] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-22 07:11:00,300 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-22 07:11:00,381 INFO [RS:1;jenkins-hbase4:34227] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34227%2C1690009859598, suffix=, logDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/WALs/jenkins-hbase4.apache.org,34227,1690009859598, archiveDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/oldWALs, maxLogs=32 2023-07-22 07:11:00,381 INFO [RS:2;jenkins-hbase4:44257] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44257%2C1690009859763, suffix=, logDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/WALs/jenkins-hbase4.apache.org,44257,1690009859763, archiveDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/oldWALs, maxLogs=32 2023-07-22 07:11:00,384 INFO [RS:0;jenkins-hbase4:38441] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38441%2C1690009859417, suffix=, logDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/WALs/jenkins-hbase4.apache.org,38441,1690009859417, archiveDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/oldWALs, maxLogs=32 2023-07-22 07:11:00,404 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46471,DS-54862990-c4ed-4f54-b35e-aa588a06880e,DISK] 2023-07-22 07:11:00,404 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40325,DS-569084ad-ab2f-4422-bbd3-3b09e4616e87,DISK] 2023-07-22 07:11:00,404 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43305,DS-eba9c6db-5917-47d2-ac38-49a4525b7417,DISK] 2023-07-22 07:11:00,409 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46471,DS-54862990-c4ed-4f54-b35e-aa588a06880e,DISK] 2023-07-22 07:11:00,409 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40325,DS-569084ad-ab2f-4422-bbd3-3b09e4616e87,DISK] 2023-07-22 07:11:00,409 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43305,DS-eba9c6db-5917-47d2-ac38-49a4525b7417,DISK] 2023-07-22 07:11:00,411 DEBUG [jenkins-hbase4:39207] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-22 07:11:00,412 DEBUG [jenkins-hbase4:39207] 
balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:11:00,412 DEBUG [jenkins-hbase4:39207] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:11:00,412 DEBUG [jenkins-hbase4:39207] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:11:00,412 DEBUG [jenkins-hbase4:39207] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:11:00,412 DEBUG [jenkins-hbase4:39207] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:11:00,416 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44257,1690009859763, state=OPENING 2023-07-22 07:11:00,416 INFO [RS:1;jenkins-hbase4:34227] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/WALs/jenkins-hbase4.apache.org,34227,1690009859598/jenkins-hbase4.apache.org%2C34227%2C1690009859598.1690009860381 2023-07-22 07:11:00,416 DEBUG [RS:1;jenkins-hbase4:34227] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46471,DS-54862990-c4ed-4f54-b35e-aa588a06880e,DISK], DatanodeInfoWithStorage[127.0.0.1:40325,DS-569084ad-ab2f-4422-bbd3-3b09e4616e87,DISK], DatanodeInfoWithStorage[127.0.0.1:43305,DS-eba9c6db-5917-47d2-ac38-49a4525b7417,DISK]] 2023-07-22 07:11:00,417 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-22 07:11:00,418 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:11:00,422 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 07:11:00,422 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43305,DS-eba9c6db-5917-47d2-ac38-49a4525b7417,DISK] 2023-07-22 07:11:00,422 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44257,1690009859763}] 2023-07-22 07:11:00,422 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40325,DS-569084ad-ab2f-4422-bbd3-3b09e4616e87,DISK] 2023-07-22 07:11:00,422 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46471,DS-54862990-c4ed-4f54-b35e-aa588a06880e,DISK] 2023-07-22 07:11:00,422 INFO [RS:2;jenkins-hbase4:44257] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/WALs/jenkins-hbase4.apache.org,44257,1690009859763/jenkins-hbase4.apache.org%2C44257%2C1690009859763.1690009860381 2023-07-22 07:11:00,423 DEBUG [RS:2;jenkins-hbase4:44257] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:46471,DS-54862990-c4ed-4f54-b35e-aa588a06880e,DISK], DatanodeInfoWithStorage[127.0.0.1:40325,DS-569084ad-ab2f-4422-bbd3-3b09e4616e87,DISK], DatanodeInfoWithStorage[127.0.0.1:43305,DS-eba9c6db-5917-47d2-ac38-49a4525b7417,DISK]] 2023-07-22 07:11:00,428 INFO [RS:0;jenkins-hbase4:38441] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/WALs/jenkins-hbase4.apache.org,38441,1690009859417/jenkins-hbase4.apache.org%2C38441%2C1690009859417.1690009860384 2023-07-22 07:11:00,430 DEBUG [RS:0;jenkins-hbase4:38441] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43305,DS-eba9c6db-5917-47d2-ac38-49a4525b7417,DISK], DatanodeInfoWithStorage[127.0.0.1:40325,DS-569084ad-ab2f-4422-bbd3-3b09e4616e87,DISK], DatanodeInfoWithStorage[127.0.0.1:46471,DS-54862990-c4ed-4f54-b35e-aa588a06880e,DISK]] 2023-07-22 07:11:00,580 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:00,580 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 07:11:00,581 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58094, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 07:11:00,588 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-22 07:11:00,588 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:11:00,590 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44257%2C1690009859763.meta, suffix=.meta, logDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/WALs/jenkins-hbase4.apache.org,44257,1690009859763, archiveDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/oldWALs, maxLogs=32 2023-07-22 07:11:00,604 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40325,DS-569084ad-ab2f-4422-bbd3-3b09e4616e87,DISK] 2023-07-22 07:11:00,604 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46471,DS-54862990-c4ed-4f54-b35e-aa588a06880e,DISK] 2023-07-22 07:11:00,604 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43305,DS-eba9c6db-5917-47d2-ac38-49a4525b7417,DISK] 2023-07-22 07:11:00,607 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/WALs/jenkins-hbase4.apache.org,44257,1690009859763/jenkins-hbase4.apache.org%2C44257%2C1690009859763.meta.1690009860590.meta 2023-07-22 07:11:00,607 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer 
with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40325,DS-569084ad-ab2f-4422-bbd3-3b09e4616e87,DISK], DatanodeInfoWithStorage[127.0.0.1:46471,DS-54862990-c4ed-4f54-b35e-aa588a06880e,DISK], DatanodeInfoWithStorage[127.0.0.1:43305,DS-eba9c6db-5917-47d2-ac38-49a4525b7417,DISK]] 2023-07-22 07:11:00,607 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:11:00,608 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-22 07:11:00,608 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-22 07:11:00,608 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-22 07:11:00,608 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-22 07:11:00,608 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:11:00,608 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-22 07:11:00,608 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-22 07:11:00,611 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-22 07:11:00,612 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/info 2023-07-22 07:11:00,612 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/info 2023-07-22 07:11:00,612 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-22 07:11:00,612 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:11:00,613 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-22 07:11:00,613 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/rep_barrier 2023-07-22 07:11:00,613 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/rep_barrier 2023-07-22 07:11:00,614 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-22 07:11:00,614 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:11:00,614 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-22 07:11:00,615 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/table 2023-07-22 07:11:00,615 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/table 2023-07-22 07:11:00,616 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-22 07:11:00,616 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:11:00,617 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740 2023-07-22 07:11:00,618 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740 2023-07-22 07:11:00,620 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-22 07:11:00,622 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-22 07:11:00,622 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11036597920, jitterRate=0.0278632789850235}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-22 07:11:00,622 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-22 07:11:00,623 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690009860580 2023-07-22 07:11:00,629 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-22 07:11:00,630 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-22 07:11:00,630 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44257,1690009859763, state=OPEN 2023-07-22 07:11:00,632 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-22 07:11:00,632 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 07:11:00,634 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-22 07:11:00,634 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44257,1690009859763 in 210 msec 2023-07-22 07:11:00,635 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-22 07:11:00,635 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 375 msec 2023-07-22 07:11:00,638 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 563 msec 2023-07-22 
07:11:00,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690009860638, completionTime=-1 2023-07-22 07:11:00,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-22 07:11:00,639 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-22 07:11:00,641 DEBUG [hconnection-0x3e956335-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 07:11:00,643 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58100, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 07:11:00,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-22 07:11:00,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690009920644 2023-07-22 07:11:00,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690009980645 2023-07-22 07:11:00,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-22 07:11:00,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39207,1690009859236-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39207,1690009859236-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39207,1690009859236-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:39207, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:00,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-22 07:11:00,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-22 07:11:00,652 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-22 07:11:00,653 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-22 07:11:00,658 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:11:00,658 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:11:00,660 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/hbase/namespace/7d65c4912cc14781701102c94fe87b18 2023-07-22 07:11:00,660 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/hbase/namespace/7d65c4912cc14781701102c94fe87b18 empty. 2023-07-22 07:11:00,661 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/hbase/namespace/7d65c4912cc14781701102c94fe87b18 2023-07-22 07:11:00,661 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-22 07:11:00,698 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-22 07:11:00,703 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7d65c4912cc14781701102c94fe87b18, NAME => 'hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp 2023-07-22 07:11:00,709 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39207,1690009859236] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'} 2023-07-22 07:11:00,711 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39207,1690009859236] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-22 07:11:00,714 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:11:00,718 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:11:00,720 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/hbase/rsgroup/b245454cd52dc02c77d146e37c1c439c 2023-07-22 07:11:00,721 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/hbase/rsgroup/b245454cd52dc02c77d146e37c1c439c empty. 2023-07-22 07:11:00,721 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/hbase/rsgroup/b245454cd52dc02c77d146e37c1c439c 2023-07-22 07:11:00,721 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-22 07:11:00,747 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:11:00,747 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 7d65c4912cc14781701102c94fe87b18, disabling compactions & flushes 2023-07-22 07:11:00,747 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18. 2023-07-22 07:11:00,747 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18. 2023-07-22 07:11:00,748 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18. after waiting 0 ms 2023-07-22 07:11:00,748 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18. 2023-07-22 07:11:00,748 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18. 
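Aside: the 'hbase:namespace' descriptor logged above (info family, VERSIONS=10, IN_MEMORY=true, BLOCKSIZE=8192, BLOOMFILTER=ROW) can be rebuilt with the public builder API. A minimal sketch follows; the class name is made up and the code only constructs and prints the descriptor, it does not create the system table.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    // Illustrative rebuild of the descriptor the master logs for hbase:namespace.
    public class NamespaceDescriptorSketch {
      public static void main(String[] args) {
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
            .setInMemory(true)                 // IN_MEMORY => 'true'
            .setMaxVersions(10)                // VERSIONS => '10'
            .setBlocksize(8192)                // BLOCKSIZE => '8192'
            .build();
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("hbase", "namespace"))
            .setColumnFamily(info)
            .build();
        System.out.println(td);
      }
    }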
2023-07-22 07:11:00,748 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 7d65c4912cc14781701102c94fe87b18: 2023-07-22 07:11:00,751 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:11:00,752 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690009860752"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009860752"}]},"ts":"1690009860752"} 2023-07-22 07:11:00,769 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 07:11:00,773 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-22 07:11:00,773 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:11:00,774 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009860773"}]},"ts":"1690009860773"} 2023-07-22 07:11:00,774 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => b245454cd52dc02c77d146e37c1c439c, NAME => 'hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp 2023-07-22 07:11:00,775 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-22 07:11:00,786 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:11:00,786 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:11:00,787 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:11:00,787 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:11:00,787 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:11:00,787 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7d65c4912cc14781701102c94fe87b18, ASSIGN}] 2023-07-22 07:11:00,788 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7d65c4912cc14781701102c94fe87b18, ASSIGN 2023-07-22 07:11:00,789 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=7d65c4912cc14781701102c94fe87b18, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38441,1690009859417; forceNewPlan=false, retain=false 2023-07-22 07:11:00,798 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:11:00,798 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing b245454cd52dc02c77d146e37c1c439c, disabling compactions & flushes 2023-07-22 07:11:00,798 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c. 2023-07-22 07:11:00,798 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c. 2023-07-22 07:11:00,798 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c. after waiting 0 ms 2023-07-22 07:11:00,798 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c. 2023-07-22 07:11:00,798 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c. 2023-07-22 07:11:00,798 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for b245454cd52dc02c77d146e37c1c439c: 2023-07-22 07:11:00,801 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:11:00,802 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690009860801"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009860801"}]},"ts":"1690009860801"} 2023-07-22 07:11:00,803 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-22 07:11:00,804 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:11:00,804 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009860804"}]},"ts":"1690009860804"} 2023-07-22 07:11:00,805 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-22 07:11:00,809 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:11:00,809 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:11:00,809 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:11:00,809 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:11:00,809 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:11:00,810 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=b245454cd52dc02c77d146e37c1c439c, ASSIGN}] 2023-07-22 07:11:00,812 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=b245454cd52dc02c77d146e37c1c439c, ASSIGN 2023-07-22 07:11:00,812 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=b245454cd52dc02c77d146e37c1c439c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44257,1690009859763; forceNewPlan=false, retain=false 2023-07-22 07:11:00,812 INFO [jenkins-hbase4:39207] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-22 07:11:00,815 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=7d65c4912cc14781701102c94fe87b18, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:00,815 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690009860815"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009860815"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009860815"}]},"ts":"1690009860815"} 2023-07-22 07:11:00,815 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=b245454cd52dc02c77d146e37c1c439c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:00,815 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690009860815"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009860815"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009860815"}]},"ts":"1690009860815"} 2023-07-22 07:11:00,816 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 7d65c4912cc14781701102c94fe87b18, server=jenkins-hbase4.apache.org,38441,1690009859417}] 2023-07-22 07:11:00,817 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure b245454cd52dc02c77d146e37c1c439c, server=jenkins-hbase4.apache.org,44257,1690009859763}] 2023-07-22 07:11:00,969 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:00,969 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 07:11:00,971 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47590, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 07:11:00,975 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c. 2023-07-22 07:11:00,975 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18. 
2023-07-22 07:11:00,975 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b245454cd52dc02c77d146e37c1c439c, NAME => 'hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:11:00,975 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7d65c4912cc14781701102c94fe87b18, NAME => 'hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:11:00,975 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-22 07:11:00,975 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 7d65c4912cc14781701102c94fe87b18 2023-07-22 07:11:00,975 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c. service=MultiRowMutationService 2023-07-22 07:11:00,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:11:00,976 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-22 07:11:00,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7d65c4912cc14781701102c94fe87b18 2023-07-22 07:11:00,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7d65c4912cc14781701102c94fe87b18 2023-07-22 07:11:00,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup b245454cd52dc02c77d146e37c1c439c 2023-07-22 07:11:00,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:11:00,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b245454cd52dc02c77d146e37c1c439c 2023-07-22 07:11:00,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b245454cd52dc02c77d146e37c1c439c 2023-07-22 07:11:00,977 INFO [StoreOpener-7d65c4912cc14781701102c94fe87b18-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 7d65c4912cc14781701102c94fe87b18 2023-07-22 07:11:00,982 INFO [StoreOpener-b245454cd52dc02c77d146e37c1c439c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region b245454cd52dc02c77d146e37c1c439c 2023-07-22 07:11:00,983 DEBUG [StoreOpener-7d65c4912cc14781701102c94fe87b18-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/namespace/7d65c4912cc14781701102c94fe87b18/info 2023-07-22 07:11:00,983 DEBUG [StoreOpener-7d65c4912cc14781701102c94fe87b18-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/namespace/7d65c4912cc14781701102c94fe87b18/info 2023-07-22 07:11:00,983 DEBUG [StoreOpener-b245454cd52dc02c77d146e37c1c439c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/rsgroup/b245454cd52dc02c77d146e37c1c439c/m 2023-07-22 07:11:00,983 DEBUG [StoreOpener-b245454cd52dc02c77d146e37c1c439c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/rsgroup/b245454cd52dc02c77d146e37c1c439c/m 2023-07-22 07:11:00,983 INFO [StoreOpener-7d65c4912cc14781701102c94fe87b18-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for 
tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7d65c4912cc14781701102c94fe87b18 columnFamilyName info 2023-07-22 07:11:00,984 INFO [StoreOpener-b245454cd52dc02c77d146e37c1c439c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b245454cd52dc02c77d146e37c1c439c columnFamilyName m 2023-07-22 07:11:00,984 INFO [StoreOpener-7d65c4912cc14781701102c94fe87b18-1] regionserver.HStore(310): Store=7d65c4912cc14781701102c94fe87b18/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:11:00,984 INFO [StoreOpener-b245454cd52dc02c77d146e37c1c439c-1] regionserver.HStore(310): Store=b245454cd52dc02c77d146e37c1c439c/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:11:00,985 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/namespace/7d65c4912cc14781701102c94fe87b18 2023-07-22 07:11:00,985 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/namespace/7d65c4912cc14781701102c94fe87b18 2023-07-22 07:11:00,985 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/rsgroup/b245454cd52dc02c77d146e37c1c439c 2023-07-22 07:11:00,986 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/rsgroup/b245454cd52dc02c77d146e37c1c439c 2023-07-22 07:11:00,988 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7d65c4912cc14781701102c94fe87b18 2023-07-22 07:11:00,989 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b245454cd52dc02c77d146e37c1c439c 2023-07-22 07:11:00,991 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/namespace/7d65c4912cc14781701102c94fe87b18/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:11:00,991 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/rsgroup/b245454cd52dc02c77d146e37c1c439c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:11:00,991 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7d65c4912cc14781701102c94fe87b18; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11453742880, jitterRate=0.06671293079853058}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:11:00,992 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b245454cd52dc02c77d146e37c1c439c; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@6edd3587, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:11:00,992 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7d65c4912cc14781701102c94fe87b18: 2023-07-22 07:11:00,992 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b245454cd52dc02c77d146e37c1c439c: 2023-07-22 07:11:00,993 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18., pid=8, masterSystemTime=1690009860969 2023-07-22 07:11:00,993 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c., pid=9, masterSystemTime=1690009860969 2023-07-22 07:11:00,996 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18. 2023-07-22 07:11:00,997 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18. 2023-07-22 07:11:00,997 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=7d65c4912cc14781701102c94fe87b18, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:00,997 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690009860997"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009860997"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009860997"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009860997"}]},"ts":"1690009860997"} 2023-07-22 07:11:00,998 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c. 2023-07-22 07:11:00,998 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c. 
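Aside: with the hbase:rsgroup region open, the RSGroup startup worker (see the following records) loads group metadata and brings the GroupBasedLoadBalancer online. A minimal sketch of reading that metadata from a client is below, assuming the hbase-rsgroup client classes from this module are on the classpath; the class name is hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Illustrative read of the group metadata backed by hbase:rsgroup and its ZK mirror.
    public class RSGroupClientSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            System.out.println(group.getName() + " servers=" + group.getServers());
          }
          RSGroupInfo def = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
          System.out.println("default group tables: " + def.getTables());
        }
      }
    }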
2023-07-22 07:11:00,998 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=b245454cd52dc02c77d146e37c1c439c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:00,998 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690009860998"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009860998"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009860998"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009860998"}]},"ts":"1690009860998"} 2023-07-22 07:11:01,001 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-22 07:11:01,001 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 7d65c4912cc14781701102c94fe87b18, server=jenkins-hbase4.apache.org,38441,1690009859417 in 183 msec 2023-07-22 07:11:01,001 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-22 07:11:01,001 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure b245454cd52dc02c77d146e37c1c439c, server=jenkins-hbase4.apache.org,44257,1690009859763 in 184 msec 2023-07-22 07:11:01,003 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-22 07:11:01,003 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=7d65c4912cc14781701102c94fe87b18, ASSIGN in 214 msec 2023-07-22 07:11:01,003 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-22 07:11:01,003 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:11:01,003 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=b245454cd52dc02c77d146e37c1c439c, ASSIGN in 191 msec 2023-07-22 07:11:01,003 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009861003"}]},"ts":"1690009861003"} 2023-07-22 07:11:01,004 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:11:01,004 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009861004"}]},"ts":"1690009861004"} 2023-07-22 07:11:01,005 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-22 07:11:01,005 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-22 07:11:01,007 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:11:01,008 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:11:01,009 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 356 msec 2023-07-22 07:11:01,009 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 299 msec 2023-07-22 07:11:01,015 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39207,1690009859236] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-22 07:11:01,015 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39207,1690009859236] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-22 07:11:01,020 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:11:01,020 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39207,1690009859236] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:01,022 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39207,1690009859236] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-22 07:11:01,023 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39207,1690009859236] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-22 07:11:01,053 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-22 07:11:01,055 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-22 07:11:01,055 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:11:01,058 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 07:11:01,059 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47592, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 07:11:01,062 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-22 07:11:01,071 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, 
quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 07:11:01,074 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-07-22 07:11:01,084 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-22 07:11:01,089 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 07:11:01,092 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 8 msec 2023-07-22 07:11:01,100 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-22 07:11:01,102 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-22 07:11:01,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.183sec 2023-07-22 07:11:01,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-22 07:11:01,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-22 07:11:01,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-22 07:11:01,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39207,1690009859236-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-22 07:11:01,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39207,1690009859236-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-22 07:11:01,103 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-22 07:11:01,108 DEBUG [Listener at localhost/44075] zookeeper.ReadOnlyZKClient(139): Connect 0x72f6fde0 to 127.0.0.1:58374 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:11:01,114 DEBUG [Listener at localhost/44075] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@54a596b9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:11:01,115 DEBUG [hconnection-0x6bb6c389-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 07:11:01,117 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58110, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 07:11:01,118 INFO [Listener at localhost/44075] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,39207,1690009859236 2023-07-22 07:11:01,118 INFO [Listener at localhost/44075] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:11:01,120 DEBUG [Listener at localhost/44075] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-22 07:11:01,121 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34718, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-22 07:11:01,124 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-22 07:11:01,124 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:11:01,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-22 07:11:01,125 DEBUG [Listener at localhost/44075] zookeeper.ReadOnlyZKClient(139): Connect 0x17e40f1d to 127.0.0.1:58374 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:11:01,129 DEBUG [Listener at localhost/44075] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a933ce3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:11:01,129 INFO [Listener at localhost/44075] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:58374 2023-07-22 07:11:01,133 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:11:01,133 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1018bde7794000a connected 2023-07-22 
07:11:01,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:01,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:01,139 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-22 07:11:01,150 INFO [Listener at localhost/44075] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 07:11:01,150 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:11:01,150 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 07:11:01,150 INFO [Listener at localhost/44075] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 07:11:01,150 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 07:11:01,150 INFO [Listener at localhost/44075] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 07:11:01,151 INFO [Listener at localhost/44075] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 07:11:01,151 INFO [Listener at localhost/44075] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41151 2023-07-22 07:11:01,152 INFO [Listener at localhost/44075] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 07:11:01,153 DEBUG [Listener at localhost/44075] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 07:11:01,153 INFO [Listener at localhost/44075] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:11:01,154 INFO [Listener at localhost/44075] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 07:11:01,155 INFO [Listener at localhost/44075] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41151 connecting to ZooKeeper ensemble=127.0.0.1:58374 2023-07-22 07:11:01,159 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:411510x0, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 07:11:01,160 DEBUG [Listener at localhost/44075] zookeeper.ZKUtil(162): regionserver:411510x0, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 07:11:01,160 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): 
regionserver:41151-0x1018bde7794000b connected 2023-07-22 07:11:01,161 DEBUG [Listener at localhost/44075] zookeeper.ZKUtil(162): regionserver:41151-0x1018bde7794000b, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-22 07:11:01,162 DEBUG [Listener at localhost/44075] zookeeper.ZKUtil(164): regionserver:41151-0x1018bde7794000b, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 07:11:01,162 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41151 2023-07-22 07:11:01,162 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41151 2023-07-22 07:11:01,162 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41151 2023-07-22 07:11:01,165 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41151 2023-07-22 07:11:01,165 DEBUG [Listener at localhost/44075] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41151 2023-07-22 07:11:01,167 INFO [Listener at localhost/44075] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 07:11:01,167 INFO [Listener at localhost/44075] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 07:11:01,167 INFO [Listener at localhost/44075] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 07:11:01,168 INFO [Listener at localhost/44075] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 07:11:01,168 INFO [Listener at localhost/44075] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 07:11:01,168 INFO [Listener at localhost/44075] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 07:11:01,168 INFO [Listener at localhost/44075] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-22 07:11:01,169 INFO [Listener at localhost/44075] http.HttpServer(1146): Jetty bound to port 42819 2023-07-22 07:11:01,169 INFO [Listener at localhost/44075] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 07:11:01,170 INFO [Listener at localhost/44075] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:11:01,170 INFO [Listener at localhost/44075] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5ed0e996{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/hadoop.log.dir/,AVAILABLE} 2023-07-22 07:11:01,171 INFO [Listener at localhost/44075] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:11:01,171 INFO [Listener at localhost/44075] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6946bc81{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 07:11:01,283 INFO [Listener at localhost/44075] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 07:11:01,284 INFO [Listener at localhost/44075] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 07:11:01,284 INFO [Listener at localhost/44075] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 07:11:01,284 INFO [Listener at localhost/44075] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-22 07:11:01,285 INFO [Listener at localhost/44075] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 07:11:01,286 INFO [Listener at localhost/44075] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@12ac4b0f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/java.io.tmpdir/jetty-0_0_0_0-42819-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4519021536673745134/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:11:01,287 INFO [Listener at localhost/44075] server.AbstractConnector(333): Started ServerConnector@49bb275c{HTTP/1.1, (http/1.1)}{0.0.0.0:42819} 2023-07-22 07:11:01,287 INFO [Listener at localhost/44075] server.Server(415): Started @43693ms 2023-07-22 07:11:01,290 INFO [RS:3;jenkins-hbase4:41151] regionserver.HRegionServer(951): ClusterId : 6d321daf-8ef8-4f33-b81c-e1a0fa103c1e 2023-07-22 07:11:01,290 DEBUG [RS:3;jenkins-hbase4:41151] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 07:11:01,291 DEBUG [RS:3;jenkins-hbase4:41151] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 07:11:01,292 DEBUG [RS:3;jenkins-hbase4:41151] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 07:11:01,293 DEBUG [RS:3;jenkins-hbase4:41151] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 07:11:01,296 DEBUG [RS:3;jenkins-hbase4:41151] zookeeper.ReadOnlyZKClient(139): Connect 0x665b7fd1 to 
127.0.0.1:58374 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 07:11:01,301 DEBUG [RS:3;jenkins-hbase4:41151] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@689c9d6a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 07:11:01,301 DEBUG [RS:3;jenkins-hbase4:41151] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6ca314a1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 07:11:01,309 DEBUG [RS:3;jenkins-hbase4:41151] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:41151 2023-07-22 07:11:01,309 INFO [RS:3;jenkins-hbase4:41151] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 07:11:01,309 INFO [RS:3;jenkins-hbase4:41151] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 07:11:01,309 DEBUG [RS:3;jenkins-hbase4:41151] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 07:11:01,310 INFO [RS:3;jenkins-hbase4:41151] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39207,1690009859236 with isa=jenkins-hbase4.apache.org/172.31.14.131:41151, startcode=1690009861150 2023-07-22 07:11:01,310 DEBUG [RS:3;jenkins-hbase4:41151] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 07:11:01,312 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33153, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 07:11:01,312 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39207] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41151,1690009861150 2023-07-22 07:11:01,312 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39207,1690009859236] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-22 07:11:01,313 DEBUG [RS:3;jenkins-hbase4:41151] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab 2023-07-22 07:11:01,313 DEBUG [RS:3;jenkins-hbase4:41151] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45267 2023-07-22 07:11:01,313 DEBUG [RS:3;jenkins-hbase4:41151] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44693 2023-07-22 07:11:01,317 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:11:01,317 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:11:01,317 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39207,1690009859236] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:01,317 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:11:01,317 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:11:01,317 DEBUG [RS:3;jenkins-hbase4:41151] zookeeper.ZKUtil(162): regionserver:41151-0x1018bde7794000b, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41151,1690009861150 2023-07-22 07:11:01,317 WARN [RS:3;jenkins-hbase4:41151] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-22 07:11:01,317 INFO [RS:3;jenkins-hbase4:41151] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 07:11:01,317 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39207,1690009859236] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-22 07:11:01,317 DEBUG [RS:3;jenkins-hbase4:41151] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/WALs/jenkins-hbase4.apache.org,41151,1690009861150 2023-07-22 07:11:01,317 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41151,1690009861150 2023-07-22 07:11:01,318 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41151,1690009861150] 2023-07-22 07:11:01,318 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41151,1690009861150 2023-07-22 07:11:01,319 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39207,1690009859236] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-22 07:11:01,319 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:01,319 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:01,319 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41151,1690009861150 2023-07-22 07:11:01,319 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:01,320 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:01,320 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:01,320 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:01,321 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:01,321 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:01,321 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:01,321 DEBUG [RS:3;jenkins-hbase4:41151] zookeeper.ZKUtil(162): regionserver:41151-0x1018bde7794000b, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41151,1690009861150 2023-07-22 07:11:01,321 DEBUG [RS:3;jenkins-hbase4:41151] zookeeper.ZKUtil(162): regionserver:41151-0x1018bde7794000b, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:01,322 DEBUG [RS:3;jenkins-hbase4:41151] zookeeper.ZKUtil(162): regionserver:41151-0x1018bde7794000b, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:01,322 DEBUG [RS:3;jenkins-hbase4:41151] zookeeper.ZKUtil(162): regionserver:41151-0x1018bde7794000b, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:01,323 DEBUG [RS:3;jenkins-hbase4:41151] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 07:11:01,323 INFO [RS:3;jenkins-hbase4:41151] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 07:11:01,324 INFO [RS:3;jenkins-hbase4:41151] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 07:11:01,325 INFO [RS:3;jenkins-hbase4:41151] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 07:11:01,325 INFO [RS:3;jenkins-hbase4:41151] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:01,325 INFO [RS:3;jenkins-hbase4:41151] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 07:11:01,326 INFO [RS:3;jenkins-hbase4:41151] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-22 07:11:01,326 DEBUG [RS:3;jenkins-hbase4:41151] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:01,326 DEBUG [RS:3;jenkins-hbase4:41151] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:01,326 DEBUG [RS:3;jenkins-hbase4:41151] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:01,327 DEBUG [RS:3;jenkins-hbase4:41151] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:01,327 DEBUG [RS:3;jenkins-hbase4:41151] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:01,327 DEBUG [RS:3;jenkins-hbase4:41151] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 07:11:01,327 DEBUG [RS:3;jenkins-hbase4:41151] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:01,327 DEBUG [RS:3;jenkins-hbase4:41151] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:01,327 DEBUG [RS:3;jenkins-hbase4:41151] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:01,327 DEBUG [RS:3;jenkins-hbase4:41151] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 07:11:01,331 INFO [RS:3;jenkins-hbase4:41151] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:01,331 INFO [RS:3;jenkins-hbase4:41151] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:01,331 INFO [RS:3;jenkins-hbase4:41151] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 07:11:01,344 INFO [RS:3;jenkins-hbase4:41151] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 07:11:01,344 INFO [RS:3;jenkins-hbase4:41151] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41151,1690009861150-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-22 07:11:01,356 INFO [RS:3;jenkins-hbase4:41151] regionserver.Replication(203): jenkins-hbase4.apache.org,41151,1690009861150 started 2023-07-22 07:11:01,356 INFO [RS:3;jenkins-hbase4:41151] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41151,1690009861150, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41151, sessionid=0x1018bde7794000b 2023-07-22 07:11:01,356 DEBUG [RS:3;jenkins-hbase4:41151] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 07:11:01,356 DEBUG [RS:3;jenkins-hbase4:41151] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41151,1690009861150 2023-07-22 07:11:01,356 DEBUG [RS:3;jenkins-hbase4:41151] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41151,1690009861150' 2023-07-22 07:11:01,356 DEBUG [RS:3;jenkins-hbase4:41151] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 07:11:01,357 DEBUG [RS:3;jenkins-hbase4:41151] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 07:11:01,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:11:01,357 DEBUG [RS:3;jenkins-hbase4:41151] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 07:11:01,357 DEBUG [RS:3;jenkins-hbase4:41151] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 07:11:01,357 DEBUG [RS:3;jenkins-hbase4:41151] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41151,1690009861150 2023-07-22 07:11:01,357 DEBUG [RS:3;jenkins-hbase4:41151] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41151,1690009861150' 2023-07-22 07:11:01,357 DEBUG [RS:3;jenkins-hbase4:41151] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 07:11:01,358 DEBUG [RS:3;jenkins-hbase4:41151] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 07:11:01,358 DEBUG [RS:3;jenkins-hbase4:41151] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 07:11:01,358 INFO [RS:3;jenkins-hbase4:41151] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 07:11:01,358 INFO [RS:3;jenkins-hbase4:41151] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-22 07:11:01,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:01,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:11:01,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:11:01,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:11:01,363 DEBUG [hconnection-0x65064230-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 07:11:01,365 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58116, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 07:11:01,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:01,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:01,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39207] to rsgroup master 2023-07-22 07:11:01,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:11:01,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:34718 deadline: 1690011061371, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 
2023-07-22 07:11:01,372 WARN [Listener at localhost/44075] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 07:11:01,373 INFO [Listener at localhost/44075] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:11:01,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:01,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:01,374 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34227, jenkins-hbase4.apache.org:38441, jenkins-hbase4.apache.org:41151, jenkins-hbase4.apache.org:44257], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:11:01,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:11:01,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:11:01,424 INFO [Listener at localhost/44075] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=553 (was 510) Potentially hanging thread: Listener at localhost/37479-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp673983139-2285 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:45035 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56037@0x04f4a10d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1457451847_17 at /127.0.0.1:39362 [Receiving block BP-294161273-172.31.14.131-1690009858285:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 45465 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1457451847_17 at /127.0.0.1:33558 [Receiving block BP-294161273-172.31.14.131-1690009858285:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-59353d1d-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-551-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x72f6fde0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1353063304.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab-prefix:jenkins-hbase4.apache.org,44257,1690009859763.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1457451847_17 at /127.0.0.1:32972 [Receiving block BP-294161273-172.31.14.131-1690009858285:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data5/current/BP-294161273-172.31.14.131-1690009858285 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (56390283) connection to localhost/127.0.0.1:45035 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1993138545-2298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp346051425-2254 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (56390283) connection to localhost/127.0.0.1:45035 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1457451847_17 at /127.0.0.1:33584 [Receiving block BP-294161273-172.31.14.131-1690009858285:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:44257 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 403831903@qtp-1354528489-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@52f44e2f sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x65064230-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3e956335-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39207,1690009859236 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x7d0bec28 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1353063304.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_646894658_17 at /127.0.0.1:52030 [Receiving block BP-294161273-172.31.14.131-1690009858285:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2050810309-2558-acceptor-0@590f0c08-ServerConnector@49bb275c{HTTP/1.1, (http/1.1)}{0.0.0.0:42819} 
sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 45267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x665b7fd1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1353063304.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x1a2a3278-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/44075-SendThread(127.0.0.1:58374) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server idle connection scanner for port 45465 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56037@0x04f4a10d-SendThread(127.0.0.1:56037) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@28018f9e java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@6ccf0b27 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 45465 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1483993686-2222 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1483993686-2221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-294161273-172.31.14.131-1690009858285:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41151 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 46557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (56390283) connection to localhost/127.0.0.1:45267 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x31d2ea36-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@5c80a639 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-27138b95-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:34227-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1993138545-2293 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/737117775.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6673ca8a-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:41151Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 44075 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1704187382_17 at /127.0.0.1:32958 [Receiving block BP-294161273-172.31.14.131-1690009858285:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x7d0bec28-SendThread(127.0.0.1:58374) 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp673983139-2280 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/737117775.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:45267 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-294161273-172.31.14.131-1690009858285 heartbeating to localhost/127.0.0.1:45267 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@68b46d87 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1993138545-2299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@19c7700b[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 972879134@qtp-1354528489-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33171 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: IPC Client (56390283) connection to localhost/127.0.0.1:45035 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 0 on default port 44075 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 0 on default port 45267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x27b7efb5-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1483993686-2223 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44075 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44075.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x1a2a3278-SendThread(127.0.0.1:58374) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/44075-SendThread(127.0.0.1:58374) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x31d2ea36 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1353063304.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:45035 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab-prefix:jenkins-hbase4.apache.org,38441,1690009859417 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44075-SendThread(127.0.0.1:58374) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:58374 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41151 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56037@0x04f4a10d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1353063304.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_646894658_17 at /127.0.0.1:56754 [Receiving block BP-294161273-172.31.14.131-1690009858285:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(1832364051) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: IPC Server handler 1 on default port 46557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41151 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x7d0bec28-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-555-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (56390283) connection to localhost/127.0.0.1:45267 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/44075.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x27b7efb5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1353063304.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-439371282_17 at /127.0.0.1:39332 [Receiving block BP-294161273-172.31.14.131-1690009858285:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:38441-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp673983139-2281-acceptor-0@29062cd9-ServerConnector@53d5e86c{HTTP/1.1, (http/1.1)}{0.0.0.0:33697} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/MasterData-prefix:jenkins-hbase4.apache.org,39207,1690009859236 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x665b7fd1-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x3e956335-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44075-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x65064230-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-294161273-172.31.14.131-1690009858285:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1457451847_17 at /127.0.0.1:39346 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44075-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/44075-SendThread(127.0.0.1:58374) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1267279570-2191 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x31d2ea36-SendThread(127.0.0.1:58374) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-294161273-172.31.14.131-1690009858285:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-71c75751-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server idle connection scanner for port 45267 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp2050810309-2561 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-294161273-172.31.14.131-1690009858285:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_646894658_17 at /127.0.0.1:32876 [Receiving block BP-294161273-172.31.14.131-1690009858285:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp346051425-2256 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-294161273-172.31.14.131-1690009858285:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-74bddb36-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1483993686-2225 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-294161273-172.31.14.131-1690009858285 heartbeating to localhost/127.0.0.1:45267 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x72f6fde0-SendThread(127.0.0.1:58374) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase4:39207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server handler 3 on default port 45465 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp346051425-2252 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3e956335-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44075-SendThread(127.0.0.1:58374) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-560-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 44075 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-294161273-172.31.14.131-1690009858285:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x72f6fde0-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data3/current/BP-294161273-172.31.14.131-1690009858285 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1267279570-2190 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1993138545-2294 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/737117775.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x6bb6c389-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-564-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 813329857@qtp-187687512-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-294161273-172.31.14.131-1690009858285:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41151 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-294161273-172.31.14.131-1690009858285:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-294161273-172.31.14.131-1690009858285:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 46557 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (56390283) connection to localhost/127.0.0.1:45035 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34769,1690009852974 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:1;jenkins-hbase4:34227 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 44075 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:58374@0x27b7efb5-SendThread(127.0.0.1:58374) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 4 on default port 46557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 2 on default port 45465 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: M:0;jenkins-hbase4:39207 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44075-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:45267 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x665b7fd1-SendThread(127.0.0.1:58374) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server idle connection scanner for port 44075 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-294161273-172.31.14.131-1690009858285:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1094789972@qtp-582745635-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1483993686-2224 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2050810309-2557 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/737117775.run(Unknown Source) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x17e40f1d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1267279570-2195 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp346051425-2250-acceptor-0@2f555deb-ServerConnector@373e134e{HTTP/1.1, (http/1.1)}{0.0.0.0:39473} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@254416fa[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp673983139-2287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@4f269b40 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41151 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1267279570-2194 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-294161273-172.31.14.131-1690009858285:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44075.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: hconnection-0x3e956335-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-544-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data1/current/BP-294161273-172.31.14.131-1690009858285 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41151 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
qtp673983139-2286 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 46557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1267279570-2193 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x17e40f1d-SendThread(127.0.0.1:58374) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@2a0d1eb2 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-550-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3e956335-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (56390283) connection to localhost/127.0.0.1:45267 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1457451847_17 at /127.0.0.1:39314 [Receiving block BP-294161273-172.31.14.131-1690009858285:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-545-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1457451847_17 at /127.0.0.1:32942 [Receiving block BP-294161273-172.31.14.131-1690009858285:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp673983139-2282 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ProcessThread(sid:0 cport:58374): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: qtp346051425-2253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1267279570-2192 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@4800f182 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: jenkins-hbase4:38441Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:45035 
java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 19671955@qtp-740731438-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3f55905a java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1993138545-2296-acceptor-0@10209d77-ServerConnector@780b01d8{HTTP/1.1, (http/1.1)}{0.0.0.0:38169} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp346051425-2251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:45267 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-294161273-172.31.14.131-1690009858285:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:38441 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1457451847_17 at /127.0.0.1:56718 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41151 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp346051425-2255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab-prefix:jenkins-hbase4.apache.org,34227,1690009859598 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:44257-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp673983139-2283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 45267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (56390283) connection to localhost/127.0.0.1:45267 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:45267 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data4/current/BP-294161273-172.31.14.131-1690009858285 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44075-SendThread(127.0.0.1:58374) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp2050810309-2564 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data2/current/BP-294161273-172.31.14.131-1690009858285 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 45267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab-prefix:jenkins-hbase4.apache.org,44257,1690009859763 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3e956335-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2050810309-2559 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41151 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp673983139-2284 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1483993686-2220-acceptor-0@61e50506-ServerConnector@57cc7d77{HTTP/1.1, (http/1.1)}{0.0.0.0:44801} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x1a2a3278 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1353063304.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (56390283) connection to localhost/127.0.0.1:45267 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1267279570-2188 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/737117775.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 44075 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp346051425-2249 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/737117775.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 46557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (56390283) connection to localhost/127.0.0.1:45035 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-294161273-172.31.14.131-1690009858285:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44075-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-10-3 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data6/current/BP-294161273-172.31.14.131-1690009858285 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:41151-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690009860100 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:41151 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1068175579@qtp-740731438-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43507 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) 
org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: IPC Client (56390283) connection to localhost/127.0.0.1:45267 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: jenkins-hbase4:34227Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-294161273-172.31.14.131-1690009858285:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690009860099 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@24a1c7a4[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2050810309-2562 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1267279570-2189-acceptor-0@509fad42-ServerConnector@371cb2cf{HTTP/1.1, (http/1.1)}{0.0.0.0:44693} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x3e956335-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1280191429@qtp-582745635-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38025 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp1993138545-2292 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/737117775.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 45465 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-294161273-172.31.14.131-1690009858285:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-439371282_17 at /127.0.0.1:33622 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3e956335-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1483993686-2219 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/737117775.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:44257Replication Statistics #0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1550003140@qtp-187687512-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39487 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@257227ca java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 45267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-439371282_17 at /127.0.0.1:33578 [Receiving block BP-294161273-172.31.14.131-1690009858285:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58374@0x17e40f1d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1353063304.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1993138545-2297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:45035 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1704187382_17 at /127.0.0.1:39328 [Receiving block BP-294161273-172.31.14.131-1690009858285:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1704187382_17 at /127.0.0.1:33564 [Receiving block BP-294161273-172.31.14.131-1690009858285:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1483993686-2226 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41151 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp2050810309-2563 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@23b1628c java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp2050810309-2560 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Listener at localhost/37479-SendThread(127.0.0.1:56037) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@7da13881 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@68aace56 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1993138545-2295 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/737117775.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-294161273-172.31.14.131-1690009858285 heartbeating to localhost/127.0.0.1:45267 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-539-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/44075.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-439371282_17 at /127.0.0.1:32964 [Receiving block BP-294161273-172.31.14.131-1690009858285:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41151 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/44075-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-559-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/44075-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) - Thread LEAK? 
-, OpenFileDescriptor=816 (was 793) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=366 (was 378), ProcessCount=180 (was 180), AvailableMemoryMB=6415 (was 6574)
2023-07-22 07:11:01,427 WARN [Listener at localhost/44075] hbase.ResourceChecker(130): Thread=553 is superior to 500
2023-07-22 07:11:01,445 INFO [Listener at localhost/44075] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=553, OpenFileDescriptor=816, MaxFileDescriptor=60000, SystemLoadAverage=366, ProcessCount=180, AvailableMemoryMB=6413
2023-07-22 07:11:01,445 WARN [Listener at localhost/44075] hbase.ResourceChecker(130): Thread=553 is superior to 500
2023-07-22 07:11:01,446 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable
2023-07-22 07:11:01,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-22 07:11:01,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-22 07:11:01,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-22 07:11:01,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-22 07:11:01,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-22 07:11:01,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-22 07:11:01,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-22 07:11:01,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-22 07:11:01,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-22 07:11:01,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-22 07:11:01,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-22 07:11:01,460 INFO [RS:3;jenkins-hbase4:41151] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41151%2C1690009861150, suffix=, logDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/WALs/jenkins-hbase4.apache.org,41151,1690009861150, archiveDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/oldWALs, maxLogs=32
2023-07-22 07:11:01,461 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-22 07:11:01,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-22 07:11:01,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-22 07:11:01,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-22 07:11:01,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-22 07:11:01,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-22 07:11:01,481 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43305,DS-eba9c6db-5917-47d2-ac38-49a4525b7417,DISK]
2023-07-22 07:11:01,481 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40325,DS-569084ad-ab2f-4422-bbd3-3b09e4616e87,DISK]
2023-07-22 07:11:01,482 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46471,DS-54862990-c4ed-4f54-b35e-aa588a06880e,DISK]
2023-07-22 07:11:01,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-22 07:11:01,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-22 07:11:01,486 INFO [RS:3;jenkins-hbase4:41151] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/WALs/jenkins-hbase4.apache.org,41151,1690009861150/jenkins-hbase4.apache.org%2C41151%2C1690009861150.1690009861460
2023-07-22 07:11:01,486 DEBUG [RS:3;jenkins-hbase4:41151] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40325,DS-569084ad-ab2f-4422-bbd3-3b09e4616e87,DISK], DatanodeInfoWithStorage[127.0.0.1:46471,DS-54862990-c4ed-4f54-b35e-aa588a06880e,DISK], DatanodeInfoWithStorage[127.0.0.1:43305,DS-eba9c6db-5917-47d2-ac38-49a4525b7417,DISK]]
2023-07-22 07:11:01,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39207] to rsgroup master
2023-07-22 07:11:01,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-22 07:11:01,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:34718 deadline: 1690011061487, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist.
2023-07-22 07:11:01,487 WARN [Listener at localhost/44075] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI
org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:11:01,488 INFO [Listener at localhost/44075] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:11:01,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:01,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:01,489 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34227, jenkins-hbase4.apache.org:38441, jenkins-hbase4.apache.org:41151, jenkins-hbase4.apache.org:44257], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:11:01,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:11:01,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:11:01,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:11:01,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-22 07:11:01,493 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:11:01,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: 
namespace: "default" qualifier: "t1" procId is: 12 2023-07-22 07:11:01,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-22 07:11:01,495 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:01,495 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:11:01,496 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:11:01,497 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 07:11:01,498 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/default/t1/e94573fd5d07b7e0909ebec0feadf291 2023-07-22 07:11:01,499 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/default/t1/e94573fd5d07b7e0909ebec0feadf291 empty. 2023-07-22 07:11:01,499 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/default/t1/e94573fd5d07b7e0909ebec0feadf291 2023-07-22 07:11:01,499 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-22 07:11:01,513 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-22 07:11:01,514 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => e94573fd5d07b7e0909ebec0feadf291, NAME => 't1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp 2023-07-22 07:11:01,522 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:11:01,522 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing e94573fd5d07b7e0909ebec0feadf291, disabling compactions & flushes 2023-07-22 07:11:01,522 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291. 2023-07-22 07:11:01,522 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291. 2023-07-22 07:11:01,522 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291. 
after waiting 0 ms 2023-07-22 07:11:01,522 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291. 2023-07-22 07:11:01,522 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291. 2023-07-22 07:11:01,522 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for e94573fd5d07b7e0909ebec0feadf291: 2023-07-22 07:11:01,524 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 07:11:01,525 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690009861524"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009861524"}]},"ts":"1690009861524"} 2023-07-22 07:11:01,526 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 07:11:01,526 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 07:11:01,526 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009861526"}]},"ts":"1690009861526"} 2023-07-22 07:11:01,527 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-22 07:11:01,531 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 07:11:01,531 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 07:11:01,531 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 07:11:01,531 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 07:11:01,531 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-22 07:11:01,531 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 07:11:01,531 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=e94573fd5d07b7e0909ebec0feadf291, ASSIGN}] 2023-07-22 07:11:01,532 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=e94573fd5d07b7e0909ebec0feadf291, ASSIGN 2023-07-22 07:11:01,533 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=e94573fd5d07b7e0909ebec0feadf291, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38441,1690009859417; forceNewPlan=false, retain=false 2023-07-22 07:11:01,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-22 07:11:01,683 INFO [jenkins-hbase4:39207] 
balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-22 07:11:01,684 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e94573fd5d07b7e0909ebec0feadf291, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:01,685 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690009861684"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009861684"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009861684"}]},"ts":"1690009861684"} 2023-07-22 07:11:01,686 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure e94573fd5d07b7e0909ebec0feadf291, server=jenkins-hbase4.apache.org,38441,1690009859417}] 2023-07-22 07:11:01,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-22 07:11:01,841 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291. 2023-07-22 07:11:01,841 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e94573fd5d07b7e0909ebec0feadf291, NAME => 't1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291.', STARTKEY => '', ENDKEY => ''} 2023-07-22 07:11:01,841 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 e94573fd5d07b7e0909ebec0feadf291 2023-07-22 07:11:01,841 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 07:11:01,841 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e94573fd5d07b7e0909ebec0feadf291 2023-07-22 07:11:01,841 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e94573fd5d07b7e0909ebec0feadf291 2023-07-22 07:11:01,843 INFO [StoreOpener-e94573fd5d07b7e0909ebec0feadf291-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region e94573fd5d07b7e0909ebec0feadf291 2023-07-22 07:11:01,844 DEBUG [StoreOpener-e94573fd5d07b7e0909ebec0feadf291-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/default/t1/e94573fd5d07b7e0909ebec0feadf291/cf1 2023-07-22 07:11:01,844 DEBUG [StoreOpener-e94573fd5d07b7e0909ebec0feadf291-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/default/t1/e94573fd5d07b7e0909ebec0feadf291/cf1 2023-07-22 07:11:01,844 INFO [StoreOpener-e94573fd5d07b7e0909ebec0feadf291-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e94573fd5d07b7e0909ebec0feadf291 columnFamilyName cf1 2023-07-22 07:11:01,845 INFO [StoreOpener-e94573fd5d07b7e0909ebec0feadf291-1] regionserver.HStore(310): Store=e94573fd5d07b7e0909ebec0feadf291/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 07:11:01,845 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/default/t1/e94573fd5d07b7e0909ebec0feadf291 2023-07-22 07:11:01,846 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/default/t1/e94573fd5d07b7e0909ebec0feadf291 2023-07-22 07:11:01,848 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e94573fd5d07b7e0909ebec0feadf291 2023-07-22 07:11:01,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/default/t1/e94573fd5d07b7e0909ebec0feadf291/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 07:11:01,850 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e94573fd5d07b7e0909ebec0feadf291; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11743936960, jitterRate=0.09373936057090759}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 07:11:01,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e94573fd5d07b7e0909ebec0feadf291: 2023-07-22 07:11:01,851 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291., pid=14, masterSystemTime=1690009861837 2023-07-22 07:11:01,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291. 2023-07-22 07:11:01,852 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291. 
2023-07-22 07:11:01,853 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e94573fd5d07b7e0909ebec0feadf291, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:01,853 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690009861852"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690009861852"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690009861852"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690009861852"}]},"ts":"1690009861852"} 2023-07-22 07:11:01,858 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-22 07:11:01,858 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure e94573fd5d07b7e0909ebec0feadf291, server=jenkins-hbase4.apache.org,38441,1690009859417 in 168 msec 2023-07-22 07:11:01,859 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-22 07:11:01,860 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=e94573fd5d07b7e0909ebec0feadf291, ASSIGN in 327 msec 2023-07-22 07:11:01,860 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 07:11:01,860 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009861860"}]},"ts":"1690009861860"} 2023-07-22 07:11:01,861 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-22 07:11:01,863 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 07:11:01,864 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 372 msec 2023-07-22 07:11:02,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-22 07:11:02,098 INFO [Listener at localhost/44075] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-22 07:11:02,098 DEBUG [Listener at localhost/44075] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-22 07:11:02,098 INFO [Listener at localhost/44075] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:11:02,100 INFO [Listener at localhost/44075] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-22 07:11:02,100 INFO [Listener at localhost/44075] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:11:02,100 INFO [Listener at localhost/44075] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-22 07:11:02,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 07:11:02,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-22 07:11:02,105 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 07:11:02,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-22 07:11:02,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:34718 deadline: 1690009922102, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-22 07:11:02,107 INFO [Listener at localhost/44075] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:11:02,108 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-22 07:11:02,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:11:02,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:11:02,209 INFO [Listener at localhost/44075] client.HBaseAdmin$15(890): Started disable of t1 2023-07-22 07:11:02,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-22 07:11:02,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-22 07:11:02,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-22 07:11:02,213 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009862213"}]},"ts":"1690009862213"} 2023-07-22 07:11:02,214 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-22 07:11:02,216 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-22 07:11:02,217 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=e94573fd5d07b7e0909ebec0feadf291, UNASSIGN}] 2023-07-22 07:11:02,218 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=e94573fd5d07b7e0909ebec0feadf291, UNASSIGN 2023-07-22 07:11:02,218 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=e94573fd5d07b7e0909ebec0feadf291, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:02,219 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690009862218"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690009862218"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690009862218"}]},"ts":"1690009862218"} 2023-07-22 07:11:02,220 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure e94573fd5d07b7e0909ebec0feadf291, server=jenkins-hbase4.apache.org,38441,1690009859417}] 2023-07-22 07:11:02,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-22 07:11:02,372 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e94573fd5d07b7e0909ebec0feadf291 2023-07-22 07:11:02,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e94573fd5d07b7e0909ebec0feadf291, disabling compactions & flushes 2023-07-22 07:11:02,375 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291. 2023-07-22 07:11:02,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291. 2023-07-22 07:11:02,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291. after waiting 0 ms 2023-07-22 07:11:02,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291. 
2023-07-22 07:11:02,379 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/default/t1/e94573fd5d07b7e0909ebec0feadf291/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 07:11:02,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291. 2023-07-22 07:11:02,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e94573fd5d07b7e0909ebec0feadf291: 2023-07-22 07:11:02,381 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e94573fd5d07b7e0909ebec0feadf291 2023-07-22 07:11:02,381 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=e94573fd5d07b7e0909ebec0feadf291, regionState=CLOSED 2023-07-22 07:11:02,382 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690009862381"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690009862381"}]},"ts":"1690009862381"} 2023-07-22 07:11:02,384 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-22 07:11:02,385 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure e94573fd5d07b7e0909ebec0feadf291, server=jenkins-hbase4.apache.org,38441,1690009859417 in 163 msec 2023-07-22 07:11:02,386 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-22 07:11:02,386 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=e94573fd5d07b7e0909ebec0feadf291, UNASSIGN in 168 msec 2023-07-22 07:11:02,387 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690009862387"}]},"ts":"1690009862387"} 2023-07-22 07:11:02,388 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-22 07:11:02,391 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-22 07:11:02,392 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 181 msec 2023-07-22 07:11:02,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-22 07:11:02,516 INFO [Listener at localhost/44075] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-22 07:11:02,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-22 07:11:02,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-22 07:11:02,519 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-22 07:11:02,519 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-22 07:11:02,520 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-22 07:11:02,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:02,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:11:02,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:11:02,523 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/default/t1/e94573fd5d07b7e0909ebec0feadf291 2023-07-22 07:11:02,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-22 07:11:02,525 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/default/t1/e94573fd5d07b7e0909ebec0feadf291/cf1, FileablePath, hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/default/t1/e94573fd5d07b7e0909ebec0feadf291/recovered.edits] 2023-07-22 07:11:02,530 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/default/t1/e94573fd5d07b7e0909ebec0feadf291/recovered.edits/4.seqid to hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/archive/data/default/t1/e94573fd5d07b7e0909ebec0feadf291/recovered.edits/4.seqid 2023-07-22 07:11:02,531 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/.tmp/data/default/t1/e94573fd5d07b7e0909ebec0feadf291 2023-07-22 07:11:02,531 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-22 07:11:02,533 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-22 07:11:02,535 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-22 07:11:02,536 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-22 07:11:02,537 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-22 07:11:02,537 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-22 07:11:02,538 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690009862537"}]},"ts":"9223372036854775807"} 2023-07-22 07:11:02,539 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-22 07:11:02,539 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e94573fd5d07b7e0909ebec0feadf291, NAME => 't1,,1690009861491.e94573fd5d07b7e0909ebec0feadf291.', STARTKEY => '', ENDKEY => ''}] 2023-07-22 07:11:02,539 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-22 07:11:02,539 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690009862539"}]},"ts":"9223372036854775807"} 2023-07-22 07:11:02,540 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-22 07:11:02,542 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-22 07:11:02,543 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 25 msec 2023-07-22 07:11:02,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-22 07:11:02,625 INFO [Listener at localhost/44075] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-22 07:11:02,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:11:02,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 07:11:02,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:11:02,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:11:02,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:11:02,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:11:02,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:02,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:11:02,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:11:02,641 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:11:02,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:11:02,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:02,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:11:02,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:11:02,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:11:02,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39207] to rsgroup master 2023-07-22 07:11:02,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:11:02,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:34718 deadline: 1690011062651, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 2023-07-22 07:11:02,651 WARN [Listener at localhost/44075] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:11:02,655 INFO [Listener at localhost/44075] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:11:02,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,656 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34227, jenkins-hbase4.apache.org:38441, jenkins-hbase4.apache.org:41151, jenkins-hbase4.apache.org:44257], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:11:02,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:11:02,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:11:02,679 INFO [Listener at localhost/44075] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=567 (was 553) - Thread LEAK? -, OpenFileDescriptor=833 (was 816) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=366 (was 366), ProcessCount=180 (was 180), AvailableMemoryMB=6385 (was 6413) 2023-07-22 07:11:02,679 WARN [Listener at localhost/44075] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-22 07:11:02,700 INFO [Listener at localhost/44075] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=567, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=366, ProcessCount=180, AvailableMemoryMB=6382 2023-07-22 07:11:02,700 WARN [Listener at localhost/44075] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-22 07:11:02,700 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-22 07:11:02,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:11:02,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-22 07:11:02,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:11:02,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:11:02,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:11:02,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:11:02,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:02,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:11:02,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:11:02,718 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:11:02,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:11:02,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:02,721 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:11:02,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:11:02,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:11:02,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39207] to rsgroup master 2023-07-22 07:11:02,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:11:02,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34718 deadline: 1690011062729, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 2023-07-22 07:11:02,730 WARN [Listener at localhost/44075] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 07:11:02,732 INFO [Listener at localhost/44075] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:11:02,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,733 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34227, jenkins-hbase4.apache.org:38441, jenkins-hbase4.apache.org:41151, jenkins-hbase4.apache.org:44257], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:11:02,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:11:02,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:11:02,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-22 07:11:02,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:11:02,736 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-22 07:11:02,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-22 07:11:02,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 07:11:02,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:11:02,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 07:11:02,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:11:02,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:11:02,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:11:02,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:11:02,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:02,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:11:02,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:11:02,754 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:11:02,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:11:02,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:02,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:11:02,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:11:02,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:11:02,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39207] to rsgroup master 2023-07-22 07:11:02,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:11:02,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34718 deadline: 1690011062765, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 2023-07-22 07:11:02,766 WARN [Listener at localhost/44075] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:11:02,767 INFO [Listener at localhost/44075] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:11:02,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,768 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34227, jenkins-hbase4.apache.org:38441, jenkins-hbase4.apache.org:41151, jenkins-hbase4.apache.org:44257], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:11:02,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:11:02,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:11:02,789 INFO [Listener at localhost/44075] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=569 (was 567) - Thread LEAK? 
-, OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=366 (was 366), ProcessCount=180 (was 180), AvailableMemoryMB=6382 (was 6382) 2023-07-22 07:11:02,790 WARN [Listener at localhost/44075] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-22 07:11:02,807 INFO [Listener at localhost/44075] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=569, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=366, ProcessCount=180, AvailableMemoryMB=6381 2023-07-22 07:11:02,807 WARN [Listener at localhost/44075] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-22 07:11:02,808 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-22 07:11:02,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:11:02,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-22 07:11:02,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:11:02,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:11:02,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:11:02,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:11:02,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:02,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:11:02,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:11:02,820 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:11:02,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:11:02,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:02,823 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:11:02,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:11:02,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:11:02,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39207] to rsgroup master 2023-07-22 07:11:02,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:11:02,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34718 deadline: 1690011062830, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 2023-07-22 07:11:02,831 WARN [Listener at localhost/44075] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 07:11:02,833 INFO [Listener at localhost/44075] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:11:02,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,834 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34227, jenkins-hbase4.apache.org:38441, jenkins-hbase4.apache.org:41151, jenkins-hbase4.apache.org:44257], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:11:02,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:11:02,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:11:02,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:11:02,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 07:11:02,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:11:02,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:11:02,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:11:02,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:11:02,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:02,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:11:02,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:11:02,858 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:11:02,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:11:02,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:02,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:11:02,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:11:02,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:11:02,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39207] to rsgroup master 2023-07-22 07:11:02,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:11:02,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34718 deadline: 1690011062870, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 2023-07-22 07:11:02,870 WARN [Listener at localhost/44075] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:11:02,873 INFO [Listener at localhost/44075] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:11:02,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,874 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34227, jenkins-hbase4.apache.org:38441, jenkins-hbase4.apache.org:41151, jenkins-hbase4.apache.org:44257], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:11:02,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:11:02,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:11:02,924 INFO [Listener at localhost/44075] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=570 (was 569) - Thread LEAK? 
-, OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=366 (was 366), ProcessCount=180 (was 180), AvailableMemoryMB=6363 (was 6381) 2023-07-22 07:11:02,924 WARN [Listener at localhost/44075] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-22 07:11:02,945 INFO [Listener at localhost/44075] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=570, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=366, ProcessCount=180, AvailableMemoryMB=6359 2023-07-22 07:11:02,945 WARN [Listener at localhost/44075] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-22 07:11:02,945 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-22 07:11:02,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:11:02,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-22 07:11:02,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:11:02,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:11:02,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:11:02,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:11:02,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:02,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:11:02,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:11:02,959 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:11:02,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:11:02,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:02,961 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:11:02,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:11:02,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:11:02,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39207] to rsgroup master 2023-07-22 07:11:02,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:11:02,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34718 deadline: 1690011062967, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 2023-07-22 07:11:02,968 WARN [Listener at localhost/44075] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 07:11:02,970 INFO [Listener at localhost/44075] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:11:02,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,970 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34227, jenkins-hbase4.apache.org:38441, jenkins-hbase4.apache.org:41151, jenkins-hbase4.apache.org:44257], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:11:02,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:11:02,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:11:02,971 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-22 07:11:02,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-22 07:11:02,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-22 07:11:02,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:02,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:11:02,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 07:11:02,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:11:02,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:02,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:02,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-22 07:11:02,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-22 07:11:02,986 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-22 07:11:02,990 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 07:11:02,993 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-22 07:11:03,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-22 07:11:03,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-22 07:11:03,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:11:03,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:34718 deadline: 1690011063088, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-22 07:11:03,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-22 07:11:03,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-22 07:11:03,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-22 07:11:03,116 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-22 07:11:03,117 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 20 msec 2023-07-22 07:11:03,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-22 07:11:03,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-22 07:11:03,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-22 07:11:03,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:03,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-22 07:11:03,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:11:03,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 07:11:03,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:11:03,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:03,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:03,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-22 07:11:03,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-22 07:11:03,231 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-22 07:11:03,234 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-22 07:11:03,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-22 07:11:03,235 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-22 07:11:03,236 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-22 07:11:03,237 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 07:11:03,237 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-22 07:11:03,239 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-22 07:11:03,240 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-22 07:11:03,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-22 07:11:03,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-22 07:11:03,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-22 07:11:03,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:03,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:11:03,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-22 07:11:03,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:11:03,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:03,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:03,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:11:03,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:34718 deadline: 1690009923346, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-22 07:11:03,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:03,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:03,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:11:03,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
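The testNamespaceConstraint entries above exercise two rules: a group that a namespace references through the hbase.rsgroup.name property cannot be removed ("RSGroup Group_foo is referenced by namespace: Group_foo"), and a namespace cannot be created against a group that does not exist ("Region server group foo does not exist.", raised from the RSGroupAdminEndpoint preCreateNamespace hook). The Java sketch below reproduces the two failures in isolation; the Admin/NamespaceDescriptor/RSGroupAdminClient calls are assumed from the class names in the stack traces and are not the test's actual code.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class NamespaceRsGroupConstraintSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // A namespace can pin itself to a group via the hbase.rsgroup.name property.
          rsGroupAdmin.addRSGroup("Group_foo");
          admin.createNamespace(NamespaceDescriptor.create("Group_foo")
              .addConfiguration("hbase.rsgroup.name", "Group_foo").build());

          // Removing a group that a namespace still references is rejected.
          try {
            rsGroupAdmin.removeRSGroup("Group_foo");
          } catch (IOException expected) {
            // Server reports ConstraintException; the namespace must go first.
          }
          admin.deleteNamespace("Group_foo");
          rsGroupAdmin.removeRSGroup("Group_foo");

          // Creating a namespace that points at a nonexistent group is rejected by the
          // coprocessor's preCreateNamespace check.
          try {
            admin.createNamespace(NamespaceDescriptor.create("Group_foo")
                .addConfiguration("hbase.rsgroup.name", "foo").build());
          } catch (IOException expected) {
            // ConstraintException: Region server group foo does not exist.
          }
        }
      }
    }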
2023-07-22 07:11:03,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:11:03,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:11:03,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:11:03,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-22 07:11:03,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:03,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:11:03,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 07:11:03,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:11:03,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 07:11:03,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
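Throughout this section, each group mutation is followed by DEBUG entries from RSGroupInfoManagerImpl ("Updating znode: /hbase/rsgroup/<group>", "Writing ZK GroupInfo count: N"), showing that group definitions are mirrored into ZooKeeper under /hbase/rsgroup with one child znode per group. A small sketch for inspecting that mirrored state with the plain ZooKeeper client follows; the quorum address is the one logged here (127.0.0.1:58374), while the remark about the payload format is an assumption.

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class RsGroupZnodeInspector {
      public static void main(String[] args) throws Exception {
        // Connect to the quorum the test logs (quorum=127.0.0.1:58374, baseZNode=/hbase).
        ZooKeeper zk = new ZooKeeper("127.0.0.1:58374", 30000, event -> { });
        try {
          List<String> groups = zk.getChildren("/hbase/rsgroup", false);
          for (String group : groups) {
            byte[] data = zk.getData("/hbase/rsgroup/" + group, false, null);
            // Each child znode holds the group's serialized descriptor; only its size is reported here.
            System.out.println(group + ": " + (data == null ? 0 : data.length) + " bytes");
          }
        } finally {
          zk.close();
        }
      }
    }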
2023-07-22 07:11:03,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 07:11:03,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 07:11:03,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 07:11:03,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 07:11:03,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:03,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 07:11:03,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 07:11:03,367 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 07:11:03,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 07:11:03,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 07:11:03,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 07:11:03,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 07:11:03,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 07:11:03,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:03,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:03,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39207] to rsgroup master 2023-07-22 07:11:03,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 07:11:03,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34718 deadline: 1690011063375, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 2023-07-22 07:11:03,376 WARN [Listener at localhost/44075] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 07:11:03,378 INFO [Listener at localhost/44075] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 07:11:03,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 07:11:03,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 07:11:03,378 INFO [Listener at localhost/44075] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34227, jenkins-hbase4.apache.org:38441, jenkins-hbase4.apache.org:41151, jenkins-hbase4.apache.org:44257], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 07:11:03,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 07:11:03,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 07:11:03,397 INFO [Listener at localhost/44075] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=570 (was 570), OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=366 (was 366), ProcessCount=180 (was 180), AvailableMemoryMB=6346 (was 6359) 2023-07-22 07:11:03,397 WARN [Listener at localhost/44075] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-22 07:11:03,397 INFO [Listener at localhost/44075] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-22 07:11:03,397 INFO [Listener at localhost/44075] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-22 07:11:03,397 DEBUG [Listener at localhost/44075] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x72f6fde0 to 127.0.0.1:58374 2023-07-22 07:11:03,397 DEBUG [Listener at localhost/44075] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:11:03,398 DEBUG [Listener at localhost/44075] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-22 
07:11:03,398 DEBUG [Listener at localhost/44075] util.JVMClusterUtil(257): Found active master hash=801592451, stopped=false 2023-07-22 07:11:03,398 DEBUG [Listener at localhost/44075] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-22 07:11:03,398 DEBUG [Listener at localhost/44075] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-22 07:11:03,398 INFO [Listener at localhost/44075] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,39207,1690009859236 2023-07-22 07:11:03,400 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 07:11:03,400 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 07:11:03,400 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:11:03,400 INFO [Listener at localhost/44075] procedure2.ProcedureExecutor(629): Stopping 2023-07-22 07:11:03,400 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:41151-0x1018bde7794000b, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 07:11:03,400 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 07:11:03,400 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 07:11:03,401 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:11:03,401 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:11:03,401 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41151-0x1018bde7794000b, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:11:03,401 INFO [RS:3;jenkins-hbase4:41151] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41151,1690009861150' ***** 2023-07-22 07:11:03,401 INFO [RS:3;jenkins-hbase4:41151] regionserver.HRegionServer(2311): STOPPED: Exiting; cluster shutdown set and not carrying any regions 2023-07-22 07:11:03,401 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:11:03,401 DEBUG [Listener at 
localhost/44075] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1a2a3278 to 127.0.0.1:58374 2023-07-22 07:11:03,401 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 07:11:03,402 DEBUG [Listener at localhost/44075] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:11:03,402 INFO [Listener at localhost/44075] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38441,1690009859417' ***** 2023-07-22 07:11:03,402 INFO [Listener at localhost/44075] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 07:11:03,402 INFO [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 07:11:03,403 INFO [Listener at localhost/44075] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34227,1690009859598' ***** 2023-07-22 07:11:03,405 INFO [Listener at localhost/44075] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 07:11:03,405 INFO [RS:3;jenkins-hbase4:41151] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 07:11:03,405 INFO [Listener at localhost/44075] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44257,1690009859763' ***** 2023-07-22 07:11:03,405 INFO [Listener at localhost/44075] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 07:11:03,405 INFO [RS:1;jenkins-hbase4:34227] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 07:11:03,405 INFO [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 07:11:03,408 INFO [RS:0;jenkins-hbase4:38441] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@22643add{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:11:03,410 INFO [RS:0;jenkins-hbase4:38441] server.AbstractConnector(383): Stopped ServerConnector@57cc7d77{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:11:03,410 INFO [RS:3;jenkins-hbase4:41151] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@12ac4b0f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:11:03,411 INFO [RS:3;jenkins-hbase4:41151] server.AbstractConnector(383): Stopped ServerConnector@49bb275c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:11:03,411 INFO [RS:3;jenkins-hbase4:41151] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 07:11:03,410 INFO [RS:0;jenkins-hbase4:38441] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 07:11:03,411 INFO [RS:3;jenkins-hbase4:41151] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6946bc81{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 07:11:03,410 INFO [RS:1;jenkins-hbase4:34227] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@4874feda{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:11:03,411 INFO [RS:2;jenkins-hbase4:44257] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2972cbd7{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 07:11:03,412 INFO [RS:3;jenkins-hbase4:41151] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5ed0e996{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/hadoop.log.dir/,STOPPED} 2023-07-22 07:11:03,412 INFO [RS:0;jenkins-hbase4:38441] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6247dd23{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 07:11:03,413 INFO [RS:1;jenkins-hbase4:34227] server.AbstractConnector(383): Stopped ServerConnector@373e134e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:11:03,414 INFO [RS:0;jenkins-hbase4:38441] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1161f604{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/hadoop.log.dir/,STOPPED} 2023-07-22 07:11:03,414 INFO [RS:1;jenkins-hbase4:34227] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 07:11:03,414 INFO [RS:2;jenkins-hbase4:44257] server.AbstractConnector(383): Stopped ServerConnector@53d5e86c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:11:03,414 INFO [RS:2;jenkins-hbase4:44257] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 07:11:03,415 INFO [RS:1;jenkins-hbase4:34227] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7f99f0c1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 07:11:03,415 INFO [RS:3;jenkins-hbase4:41151] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 07:11:03,415 INFO [RS:0;jenkins-hbase4:38441] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 07:11:03,415 INFO [RS:3;jenkins-hbase4:41151] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 07:11:03,416 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 07:11:03,416 INFO [RS:3;jenkins-hbase4:41151] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 07:11:03,416 INFO [RS:3;jenkins-hbase4:41151] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41151,1690009861150 2023-07-22 07:11:03,416 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 07:11:03,416 INFO [RS:0;jenkins-hbase4:38441] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-22 07:11:03,416 INFO [RS:0;jenkins-hbase4:38441] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 07:11:03,416 INFO [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(3305): Received CLOSE for 7d65c4912cc14781701102c94fe87b18 2023-07-22 07:11:03,416 INFO [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:03,416 DEBUG [RS:0;jenkins-hbase4:38441] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x27b7efb5 to 127.0.0.1:58374 2023-07-22 07:11:03,416 DEBUG [RS:0;jenkins-hbase4:38441] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:11:03,416 INFO [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-22 07:11:03,416 DEBUG [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(1478): Online Regions={7d65c4912cc14781701102c94fe87b18=hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18.} 2023-07-22 07:11:03,416 INFO [RS:1;jenkins-hbase4:34227] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1ada7561{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/hadoop.log.dir/,STOPPED} 2023-07-22 07:11:03,416 INFO [RS:2;jenkins-hbase4:44257] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@54a08fd6{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 07:11:03,417 DEBUG [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(1504): Waiting on 7d65c4912cc14781701102c94fe87b18 2023-07-22 07:11:03,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7d65c4912cc14781701102c94fe87b18, disabling compactions & flushes 2023-07-22 07:11:03,416 DEBUG [RS:3;jenkins-hbase4:41151] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x665b7fd1 to 127.0.0.1:58374 2023-07-22 07:11:03,417 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18. 2023-07-22 07:11:03,417 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18. 2023-07-22 07:11:03,417 DEBUG [RS:3;jenkins-hbase4:41151] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:11:03,417 INFO [RS:1;jenkins-hbase4:34227] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 07:11:03,417 INFO [RS:1;jenkins-hbase4:34227] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 07:11:03,417 INFO [RS:1;jenkins-hbase4:34227] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 07:11:03,417 INFO [RS:1;jenkins-hbase4:34227] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:03,417 INFO [RS:3;jenkins-hbase4:41151] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41151,1690009861150; all regions closed. 
2023-07-22 07:11:03,418 INFO [RS:2;jenkins-hbase4:44257] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1f01ca24{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/hadoop.log.dir/,STOPPED} 2023-07-22 07:11:03,417 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18. after waiting 0 ms 2023-07-22 07:11:03,418 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18. 2023-07-22 07:11:03,418 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 07:11:03,417 DEBUG [RS:1;jenkins-hbase4:34227] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7d0bec28 to 127.0.0.1:58374 2023-07-22 07:11:03,418 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 7d65c4912cc14781701102c94fe87b18 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-22 07:11:03,419 DEBUG [RS:1;jenkins-hbase4:34227] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:11:03,419 INFO [RS:1;jenkins-hbase4:34227] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34227,1690009859598; all regions closed. 2023-07-22 07:11:03,419 INFO [RS:2;jenkins-hbase4:44257] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 07:11:03,419 INFO [RS:2;jenkins-hbase4:44257] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 07:11:03,420 INFO [RS:2;jenkins-hbase4:44257] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 07:11:03,420 INFO [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(3305): Received CLOSE for b245454cd52dc02c77d146e37c1c439c 2023-07-22 07:11:03,420 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 07:11:03,421 INFO [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:03,421 DEBUG [RS:2;jenkins-hbase4:44257] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x31d2ea36 to 127.0.0.1:58374 2023-07-22 07:11:03,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b245454cd52dc02c77d146e37c1c439c, disabling compactions & flushes 2023-07-22 07:11:03,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c. 2023-07-22 07:11:03,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c. 2023-07-22 07:11:03,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c. after waiting 0 ms 2023-07-22 07:11:03,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c. 
2023-07-22 07:11:03,422 DEBUG [RS:2;jenkins-hbase4:44257] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:11:03,422 INFO [RS:2;jenkins-hbase4:44257] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 07:11:03,422 INFO [RS:2;jenkins-hbase4:44257] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 07:11:03,422 INFO [RS:2;jenkins-hbase4:44257] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-22 07:11:03,422 INFO [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-22 07:11:03,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b245454cd52dc02c77d146e37c1c439c 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-22 07:11:03,423 INFO [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-22 07:11:03,423 DEBUG [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(1478): Online Regions={b245454cd52dc02c77d146e37c1c439c=hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c., 1588230740=hbase:meta,,1.1588230740} 2023-07-22 07:11:03,423 DEBUG [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(1504): Waiting on 1588230740, b245454cd52dc02c77d146e37c1c439c 2023-07-22 07:11:03,424 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-22 07:11:03,424 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-22 07:11:03,424 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-22 07:11:03,424 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-22 07:11:03,424 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-22 07:11:03,424 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-22 07:11:03,426 DEBUG [RS:3;jenkins-hbase4:41151] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/oldWALs 2023-07-22 07:11:03,426 INFO [RS:3;jenkins-hbase4:41151] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41151%2C1690009861150:(num 1690009861460) 2023-07-22 07:11:03,426 DEBUG [RS:3;jenkins-hbase4:41151] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:11:03,426 INFO [RS:3;jenkins-hbase4:41151] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:11:03,426 INFO [RS:3;jenkins-hbase4:41151] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-22 07:11:03,426 INFO [RS:3;jenkins-hbase4:41151] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 07:11:03,426 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-22 07:11:03,426 INFO [RS:3;jenkins-hbase4:41151] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 07:11:03,426 INFO [RS:3;jenkins-hbase4:41151] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-22 07:11:03,427 DEBUG [RS:1;jenkins-hbase4:34227] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/oldWALs 2023-07-22 07:11:03,427 INFO [RS:1;jenkins-hbase4:34227] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34227%2C1690009859598:(num 1690009860381) 2023-07-22 07:11:03,427 DEBUG [RS:1;jenkins-hbase4:34227] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:11:03,428 INFO [RS:3;jenkins-hbase4:41151] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41151 2023-07-22 07:11:03,433 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:11:03,435 INFO [RS:1;jenkins-hbase4:34227] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:11:03,435 INFO [RS:1;jenkins-hbase4:34227] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-22 07:11:03,436 INFO [RS:1;jenkins-hbase4:34227] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 07:11:03,436 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 07:11:03,436 INFO [RS:1;jenkins-hbase4:34227] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 07:11:03,436 INFO [RS:1;jenkins-hbase4:34227] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-22 07:11:03,438 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 07:11:03,438 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-22 07:11:03,438 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:11:03,440 INFO [RS:1;jenkins-hbase4:34227] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34227 2023-07-22 07:11:03,447 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:11:03,456 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:11:03,464 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/namespace/7d65c4912cc14781701102c94fe87b18/.tmp/info/2ba595401b5b4b09b1ddfe51221aad55 2023-07-22 07:11:03,464 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/rsgroup/b245454cd52dc02c77d146e37c1c439c/.tmp/m/f7c861e07fed4f61a93dd0dfad28a5a4 2023-07-22 07:11:03,471 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f7c861e07fed4f61a93dd0dfad28a5a4 2023-07-22 07:11:03,472 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/rsgroup/b245454cd52dc02c77d146e37c1c439c/.tmp/m/f7c861e07fed4f61a93dd0dfad28a5a4 as hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/rsgroup/b245454cd52dc02c77d146e37c1c439c/m/f7c861e07fed4f61a93dd0dfad28a5a4 2023-07-22 07:11:03,480 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2ba595401b5b4b09b1ddfe51221aad55 2023-07-22 07:11:03,485 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/namespace/7d65c4912cc14781701102c94fe87b18/.tmp/info/2ba595401b5b4b09b1ddfe51221aad55 as hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/namespace/7d65c4912cc14781701102c94fe87b18/info/2ba595401b5b4b09b1ddfe51221aad55 2023-07-22 07:11:03,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f7c861e07fed4f61a93dd0dfad28a5a4 2023-07-22 07:11:03,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/rsgroup/b245454cd52dc02c77d146e37c1c439c/m/f7c861e07fed4f61a93dd0dfad28a5a4, entries=12, sequenceid=29, filesize=5.4 K 2023-07-22 07:11:03,488 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for b245454cd52dc02c77d146e37c1c439c in 66ms, sequenceid=29, compaction requested=false 2023-07-22 07:11:03,495 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/.tmp/info/ab80d17333be45f2a9b2ada1805bd2bd 2023-07-22 07:11:03,508 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2ba595401b5b4b09b1ddfe51221aad55 2023-07-22 07:11:03,508 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/namespace/7d65c4912cc14781701102c94fe87b18/info/2ba595401b5b4b09b1ddfe51221aad55, entries=3, sequenceid=9, filesize=5.0 K 2023-07-22 07:11:03,510 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 7d65c4912cc14781701102c94fe87b18 in 92ms, sequenceid=9, compaction requested=false 2023-07-22 07:11:03,516 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/rsgroup/b245454cd52dc02c77d146e37c1c439c/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-22 07:11:03,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 07:11:03,517 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c. 2023-07-22 07:11:03,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b245454cd52dc02c77d146e37c1c439c: 2023-07-22 07:11:03,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690009860709.b245454cd52dc02c77d146e37c1c439c. 
2023-07-22 07:11:03,526 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41151,1690009861150 2023-07-22 07:11:03,526 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41151,1690009861150 2023-07-22 07:11:03,526 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:41151-0x1018bde7794000b, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41151,1690009861150 2023-07-22 07:11:03,526 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:41151-0x1018bde7794000b, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:11:03,526 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41151,1690009861150 2023-07-22 07:11:03,526 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:11:03,526 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:03,526 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:11:03,526 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:11:03,527 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:41151-0x1018bde7794000b, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:03,526 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ab80d17333be45f2a9b2ada1805bd2bd 2023-07-22 07:11:03,526 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:11:03,527 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:03,527 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34227,1690009859598 2023-07-22 07:11:03,530 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/namespace/7d65c4912cc14781701102c94fe87b18/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-22 07:11:03,532 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18. 2023-07-22 07:11:03,532 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7d65c4912cc14781701102c94fe87b18: 2023-07-22 07:11:03,532 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690009860651.7d65c4912cc14781701102c94fe87b18. 2023-07-22 07:11:03,544 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/.tmp/rep_barrier/ed6f885f4b2742d7bff4f833ae62d3b5 2023-07-22 07:11:03,551 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ed6f885f4b2742d7bff4f833ae62d3b5 2023-07-22 07:11:03,568 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/.tmp/table/532748a31f764fb287366dc9df481042 2023-07-22 07:11:03,575 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 532748a31f764fb287366dc9df481042 2023-07-22 07:11:03,576 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/.tmp/info/ab80d17333be45f2a9b2ada1805bd2bd as hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/info/ab80d17333be45f2a9b2ada1805bd2bd 2023-07-22 07:11:03,582 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ab80d17333be45f2a9b2ada1805bd2bd 2023-07-22 07:11:03,582 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/info/ab80d17333be45f2a9b2ada1805bd2bd, entries=22, sequenceid=26, filesize=7.3 K 2023-07-22 07:11:03,585 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/.tmp/rep_barrier/ed6f885f4b2742d7bff4f833ae62d3b5 as hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/rep_barrier/ed6f885f4b2742d7bff4f833ae62d3b5 2023-07-22 07:11:03,596 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ed6f885f4b2742d7bff4f833ae62d3b5 2023-07-22 07:11:03,597 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/rep_barrier/ed6f885f4b2742d7bff4f833ae62d3b5, entries=1, sequenceid=26, filesize=4.9 K 2023-07-22 07:11:03,597 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/.tmp/table/532748a31f764fb287366dc9df481042 as hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/table/532748a31f764fb287366dc9df481042 2023-07-22 07:11:03,605 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 532748a31f764fb287366dc9df481042 2023-07-22 07:11:03,605 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/table/532748a31f764fb287366dc9df481042, entries=6, sequenceid=26, filesize=5.1 K 2023-07-22 07:11:03,606 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 182ms, sequenceid=26, compaction requested=false 2023-07-22 07:11:03,617 INFO [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38441,1690009859417; all regions closed. 
2023-07-22 07:11:03,622 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41151,1690009861150] 2023-07-22 07:11:03,622 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41151,1690009861150; numProcessing=1 2023-07-22 07:11:03,623 DEBUG [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-22 07:11:03,625 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41151,1690009861150 already deleted, retry=false 2023-07-22 07:11:03,625 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41151,1690009861150 expired; onlineServers=3 2023-07-22 07:11:03,625 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34227,1690009859598] 2023-07-22 07:11:03,625 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34227,1690009859598; numProcessing=2 2023-07-22 07:11:03,628 DEBUG [RS:0;jenkins-hbase4:38441] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/oldWALs 2023-07-22 07:11:03,628 INFO [RS:0;jenkins-hbase4:38441] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38441%2C1690009859417:(num 1690009860384) 2023-07-22 07:11:03,628 DEBUG [RS:0;jenkins-hbase4:38441] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:11:03,628 INFO [RS:0;jenkins-hbase4:38441] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:11:03,628 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-22 07:11:03,629 INFO [RS:0;jenkins-hbase4:38441] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-22 07:11:03,629 INFO [RS:0;jenkins-hbase4:38441] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 07:11:03,629 INFO [RS:0;jenkins-hbase4:38441] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 07:11:03,629 INFO [RS:0;jenkins-hbase4:38441] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-22 07:11:03,629 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-22 07:11:03,630 INFO [RS:0;jenkins-hbase4:38441] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38441 2023-07-22 07:11:03,631 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 07:11:03,631 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-22 07:11:03,631 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-22 07:11:03,631 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-22 07:11:03,723 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:11:03,723 INFO [RS:1;jenkins-hbase4:34227] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34227,1690009859598; zookeeper connection closed. 2023-07-22 07:11:03,723 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:34227-0x1018bde77940002, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:11:03,724 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4b392fe6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4b392fe6 2023-07-22 07:11:03,725 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:11:03,725 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:03,725 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38441,1690009859417 2023-07-22 07:11:03,725 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34227,1690009859598 already deleted, retry=false 2023-07-22 07:11:03,725 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34227,1690009859598 expired; onlineServers=2 2023-07-22 07:11:03,824 INFO [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44257,1690009859763; all regions closed. 2023-07-22 07:11:03,824 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:41151-0x1018bde7794000b, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:11:03,824 INFO [RS:3;jenkins-hbase4:41151] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41151,1690009861150; zookeeper connection closed. 
2023-07-22 07:11:03,824 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:41151-0x1018bde7794000b, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 07:11:03,824 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5c5c0722] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5c5c0722 2023-07-22 07:11:03,825 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38441,1690009859417] 2023-07-22 07:11:03,825 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38441,1690009859417; numProcessing=3 2023-07-22 07:11:03,827 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38441,1690009859417 already deleted, retry=false 2023-07-22 07:11:03,827 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38441,1690009859417 expired; onlineServers=1 2023-07-22 07:11:03,830 DEBUG [RS:2;jenkins-hbase4:44257] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/oldWALs 2023-07-22 07:11:03,831 INFO [RS:2;jenkins-hbase4:44257] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44257%2C1690009859763.meta:.meta(num 1690009860590) 2023-07-22 07:11:03,836 DEBUG [RS:2;jenkins-hbase4:44257] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/oldWALs 2023-07-22 07:11:03,837 INFO [RS:2;jenkins-hbase4:44257] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44257%2C1690009859763:(num 1690009860381) 2023-07-22 07:11:03,837 DEBUG [RS:2;jenkins-hbase4:44257] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:11:03,837 INFO [RS:2;jenkins-hbase4:44257] regionserver.LeaseManager(133): Closed leases 2023-07-22 07:11:03,837 INFO [RS:2;jenkins-hbase4:44257] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-22 07:11:03,837 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-22 07:11:03,838 INFO [RS:2;jenkins-hbase4:44257] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44257 2023-07-22 07:11:03,841 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44257,1690009859763 2023-07-22 07:11:03,841 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 07:11:03,842 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44257,1690009859763] 2023-07-22 07:11:03,843 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44257,1690009859763; numProcessing=4 2023-07-22 07:11:03,844 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44257,1690009859763 already deleted, retry=false 2023-07-22 07:11:03,844 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44257,1690009859763 expired; onlineServers=0 2023-07-22 07:11:03,844 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39207,1690009859236' ***** 2023-07-22 07:11:03,844 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-22 07:11:03,844 DEBUG [M:0;jenkins-hbase4:39207] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2bf8c5a3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 07:11:03,844 INFO [M:0;jenkins-hbase4:39207] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 07:11:03,847 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-22 07:11:03,847 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 07:11:03,847 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 07:11:03,847 INFO [M:0;jenkins-hbase4:39207] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@43f53346{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-22 07:11:03,848 INFO [M:0;jenkins-hbase4:39207] server.AbstractConnector(383): Stopped ServerConnector@371cb2cf{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:11:03,848 INFO [M:0;jenkins-hbase4:39207] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 07:11:03,849 INFO [M:0;jenkins-hbase4:39207] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@4cf54f13{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 07:11:03,849 INFO [M:0;jenkins-hbase4:39207] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6ed2f93{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/hadoop.log.dir/,STOPPED} 2023-07-22 07:11:03,850 INFO [M:0;jenkins-hbase4:39207] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39207,1690009859236 2023-07-22 07:11:03,850 INFO [M:0;jenkins-hbase4:39207] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39207,1690009859236; all regions closed. 2023-07-22 07:11:03,850 DEBUG [M:0;jenkins-hbase4:39207] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 07:11:03,850 INFO [M:0;jenkins-hbase4:39207] master.HMaster(1491): Stopping master jetty server 2023-07-22 07:11:03,851 INFO [M:0;jenkins-hbase4:39207] server.AbstractConnector(383): Stopped ServerConnector@780b01d8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 07:11:03,851 DEBUG [M:0;jenkins-hbase4:39207] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-22 07:11:03,851 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-22 07:11:03,851 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690009860099] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690009860099,5,FailOnTimeoutGroup] 2023-07-22 07:11:03,851 DEBUG [M:0;jenkins-hbase4:39207] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-22 07:11:03,851 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690009860100] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690009860100,5,FailOnTimeoutGroup] 2023-07-22 07:11:03,852 INFO [M:0;jenkins-hbase4:39207] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-22 07:11:03,852 INFO [M:0;jenkins-hbase4:39207] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-22 07:11:03,852 INFO [M:0;jenkins-hbase4:39207] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-22 07:11:03,852 DEBUG [M:0;jenkins-hbase4:39207] master.HMaster(1512): Stopping service threads 2023-07-22 07:11:03,852 INFO [M:0;jenkins-hbase4:39207] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-22 07:11:03,852 ERROR [M:0;jenkins-hbase4:39207] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-22 07:11:03,852 INFO [M:0;jenkins-hbase4:39207] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-22 07:11:03,852 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-22 07:11:03,852 DEBUG [M:0;jenkins-hbase4:39207] zookeeper.ZKUtil(398): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-07-22 07:11:03,853 WARN  [M:0;jenkins-hbase4:39207] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-07-22 07:11:03,853 INFO  [M:0;jenkins-hbase4:39207] assignment.AssignmentManager(315): Stopping assignment manager
2023-07-22 07:11:03,853 INFO  [M:0;jenkins-hbase4:39207] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-07-22 07:11:03,853 DEBUG [M:0;jenkins-hbase4:39207] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-22 07:11:03,853 INFO  [M:0;jenkins-hbase4:39207] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-22 07:11:03,853 DEBUG [M:0;jenkins-hbase4:39207] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-22 07:11:03,853 DEBUG [M:0;jenkins-hbase4:39207] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-22 07:11:03,853 DEBUG [M:0;jenkins-hbase4:39207] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-22 07:11:03,853 INFO  [M:0;jenkins-hbase4:39207] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.21 KB heapSize=90.66 KB
2023-07-22 07:11:03,864 INFO  [M:0;jenkins-hbase4:39207] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.21 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/bf5934ab87a7445dafb9a9ab573d26fd
2023-07-22 07:11:03,870 DEBUG [M:0;jenkins-hbase4:39207] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/bf5934ab87a7445dafb9a9ab573d26fd as hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/bf5934ab87a7445dafb9a9ab573d26fd
2023-07-22 07:11:03,876 INFO  [M:0;jenkins-hbase4:39207] regionserver.HStore(1080): Added hdfs://localhost:45267/user/jenkins/test-data/bf469a2d-0740-951c-0d71-e8a7443854ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/bf5934ab87a7445dafb9a9ab573d26fd, entries=22, sequenceid=175, filesize=11.1 K
2023-07-22 07:11:03,877 INFO  [M:0;jenkins-hbase4:39207] regionserver.HRegion(2948): Finished flush of dataSize ~76.21 KB/78041, heapSize ~90.64 KB/92816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=175, compaction requested=false
2023-07-22 07:11:03,879 INFO  [M:0;jenkins-hbase4:39207] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-22 07:11:03,879 DEBUG [M:0;jenkins-hbase4:39207] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-22 07:11:03,882 INFO  [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-22 07:11:03,882 INFO  [M:0;jenkins-hbase4:39207] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-22 07:11:03,883 INFO  [M:0;jenkins-hbase4:39207] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39207
2023-07-22 07:11:03,886 DEBUG [M:0;jenkins-hbase4:39207] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,39207,1690009859236 already deleted, retry=false
2023-07-22 07:11:03,925 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-22 07:11:03,925 INFO  [RS:0;jenkins-hbase4:38441] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38441,1690009859417; zookeeper connection closed.
2023-07-22 07:11:03,925 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:38441-0x1018bde77940001, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-22 07:11:03,925 INFO  [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3417e9a3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3417e9a3
2023-07-22 07:11:04,025 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-22 07:11:04,025 INFO  [M:0;jenkins-hbase4:39207] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39207,1690009859236; zookeeper connection closed.
2023-07-22 07:11:04,025 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): master:39207-0x1018bde77940000, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-22 07:11:04,126 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-22 07:11:04,126 DEBUG [Listener at localhost/44075-EventThread] zookeeper.ZKWatcher(600): regionserver:44257-0x1018bde77940003, quorum=127.0.0.1:58374, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-22 07:11:04,126 INFO  [RS:2;jenkins-hbase4:44257] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44257,1690009859763; zookeeper connection closed.
2023-07-22 07:11:04,126 INFO  [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5116b17c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5116b17c
2023-07-22 07:11:04,126 INFO  [Listener at localhost/44075] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-22 07:11:04,126 WARN  [Listener at localhost/44075] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-22 07:11:04,130 INFO  [Listener at localhost/44075] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-22 07:11:04,233 WARN  [BP-294161273-172.31.14.131-1690009858285 heartbeating to localhost/127.0.0.1:45267] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-22 07:11:04,233 WARN  [BP-294161273-172.31.14.131-1690009858285 heartbeating to localhost/127.0.0.1:45267] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-294161273-172.31.14.131-1690009858285 (Datanode Uuid b99d1d45-9a0a-45e7-baea-14c6c4bcb0b9) service to localhost/127.0.0.1:45267
2023-07-22 07:11:04,234 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data5/current/BP-294161273-172.31.14.131-1690009858285] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-22 07:11:04,234 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data6/current/BP-294161273-172.31.14.131-1690009858285] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-22 07:11:04,235 WARN  [Listener at localhost/44075] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-22 07:11:04,240 INFO  [Listener at localhost/44075] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-22 07:11:04,343 WARN  [BP-294161273-172.31.14.131-1690009858285 heartbeating to localhost/127.0.0.1:45267] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-22 07:11:04,343 WARN  [BP-294161273-172.31.14.131-1690009858285 heartbeating to localhost/127.0.0.1:45267] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-294161273-172.31.14.131-1690009858285 (Datanode Uuid 0e8bc9f2-11d4-4158-906f-17e1bdd8ba6b) service to localhost/127.0.0.1:45267
2023-07-22 07:11:04,344 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data3/current/BP-294161273-172.31.14.131-1690009858285] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-22 07:11:04,344 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data4/current/BP-294161273-172.31.14.131-1690009858285] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-22 07:11:04,345 WARN  [Listener at localhost/44075] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-22 07:11:04,348 INFO  [Listener at localhost/44075] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-22 07:11:04,452 WARN  [BP-294161273-172.31.14.131-1690009858285 heartbeating to localhost/127.0.0.1:45267] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-22 07:11:04,452 WARN  [BP-294161273-172.31.14.131-1690009858285 heartbeating to localhost/127.0.0.1:45267] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-294161273-172.31.14.131-1690009858285 (Datanode Uuid f1f095cc-1ee1-449e-9c51-f00bd4b30c9d) service to localhost/127.0.0.1:45267
2023-07-22 07:11:04,452 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data1/current/BP-294161273-172.31.14.131-1690009858285] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-22 07:11:04,453 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5a1f5817-4556-8293-71dc-0238ce857818/cluster_f0188540-95af-947d-8415-f9a1cf2f35cc/dfs/data/data2/current/BP-294161273-172.31.14.131-1690009858285] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-22 07:11:04,462 INFO  [Listener at localhost/44075] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-22 07:11:04,576 INFO  [Listener at localhost/44075] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-22 07:11:04,602 INFO  [Listener at localhost/44075] hbase.HBaseTestingUtility(1293): Minicluster is down